Nov 25 12:14:08 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 12:14:08 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 12:14:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 12:14:09 crc restorecon[4687]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 
12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 12:14:09 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 12:14:10 crc kubenswrapper[4688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 12:14:10 crc kubenswrapper[4688]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 12:14:10 crc kubenswrapper[4688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 12:14:10 crc kubenswrapper[4688]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 25 12:14:10 crc kubenswrapper[4688]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 12:14:10 crc kubenswrapper[4688]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.384216 4688 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390477 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390515 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390559 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390575 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390590 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390604 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390616 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390629 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390642 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390652 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390664 4688 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390674 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390686 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390699 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390709 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390722 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390734 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390746 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390757 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390767 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390777 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390787 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390797 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390808 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390818 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390828 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390839 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390849 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390884 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390895 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390905 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390915 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390926 4688 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390936 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390947 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390957 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390967 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390979 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.390992 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391003 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391013 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391023 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391035 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391047 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391057 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391067 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391077 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391089 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391099 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391114 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391127 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391138 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391150 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391161 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391172 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391183 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391193 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391206 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391216 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391228 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391238 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391249 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391260 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391273 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391284 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391295 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391306 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391317 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391327 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391338 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.391351 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392490 4688 flags.go:64] FLAG: --address="0.0.0.0"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392570 4688 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392599 4688 flags.go:64] FLAG: --anonymous-auth="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392615 4688 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392632 4688 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392645 4688 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392661 4688 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392693 4688 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392706 4688 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392719 4688 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392733 4688 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392759 4688 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392771 4688 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392783 4688 flags.go:64] FLAG: --cgroup-root=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392796 4688 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392809 4688 flags.go:64] FLAG: --client-ca-file=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392821 4688 flags.go:64] FLAG: --cloud-config=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392832 4688 flags.go:64] FLAG: --cloud-provider=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392843 4688 flags.go:64] FLAG: --cluster-dns="[]"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392859 4688 flags.go:64] FLAG: --cluster-domain=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392870 4688 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392883 4688 flags.go:64] FLAG: --config-dir=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392894 4688 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392907 4688 flags.go:64] FLAG: --container-log-max-files="5"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392923 4688 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392937 4688 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392949 4688 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392962 4688 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392974 4688 flags.go:64] FLAG: --contention-profiling="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.392987 4688 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393000 4688 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393014 4688 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393027 4688 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393042 4688 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393054 4688 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393067 4688 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393079 4688 flags.go:64] FLAG: --enable-load-reader="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393091 4688 flags.go:64] FLAG: --enable-server="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393104 4688 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393122 4688 flags.go:64] FLAG: --event-burst="100"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393136 4688 flags.go:64] FLAG: --event-qps="50"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393148 4688 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393160 4688 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393174 4688 flags.go:64] FLAG: --eviction-hard=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393189 4688 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393204 4688 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393217 4688 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393230 4688 flags.go:64] FLAG: --eviction-soft=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393242 4688 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393253 4688 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393265 4688 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393278 4688 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393290 4688 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393302 4688 flags.go:64] FLAG: --fail-swap-on="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393313 4688 flags.go:64] FLAG: --feature-gates=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393342 4688 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393356 4688 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393368 4688 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393381 4688 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393393 4688 flags.go:64] FLAG: --healthz-port="10248"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393405 4688 flags.go:64] FLAG: --help="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393417 4688 flags.go:64] FLAG: --hostname-override=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393429 4688 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393444 4688 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393456 4688 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393467 4688 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393480 4688 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393491 4688 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393502 4688 flags.go:64] FLAG: --image-service-endpoint=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393513 4688 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393560 4688 flags.go:64] FLAG: --kube-api-burst="100"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393573 4688 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393587 4688 flags.go:64] FLAG: --kube-api-qps="50"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393599 4688 flags.go:64] FLAG: --kube-reserved=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393611 4688 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393622 4688 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393633 4688 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393645 4688 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393657 4688 flags.go:64] FLAG: --lock-file=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393671 4688 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393684 4688 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393696 4688 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393715 4688 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393726 4688 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393738 4688 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393750 4688 flags.go:64] FLAG: --logging-format="text"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393762 4688 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393775 4688 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393787 4688 flags.go:64] FLAG: --manifest-url=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393798 4688 flags.go:64] FLAG: --manifest-url-header=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393814 4688 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393826 4688 flags.go:64] FLAG: --max-open-files="1000000"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393841 4688 flags.go:64] FLAG: --max-pods="110"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393853 4688 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393865 4688 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393880 4688 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393893 4688 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393905 4688 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393917 4688 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393929 4688 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393958 4688 flags.go:64] FLAG: --node-status-max-images="50"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393971 4688 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393984 4688 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.393996 4688 flags.go:64] FLAG: --pod-cidr=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394008 4688 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394031 4688 flags.go:64] FLAG: --pod-manifest-path=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394043 4688 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394056 4688 flags.go:64] FLAG: --pods-per-core="0"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394069 4688 flags.go:64] FLAG: --port="10250"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394081 4688 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394093 4688 flags.go:64] FLAG: --provider-id=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394105 4688 flags.go:64] FLAG: --qos-reserved=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394117 4688 flags.go:64] FLAG: --read-only-port="10255"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394130 4688 flags.go:64] FLAG: --register-node="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394142 4688 flags.go:64] FLAG: --register-schedulable="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394155 4688 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394179 4688 flags.go:64] FLAG: --registry-burst="10"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394190 4688 flags.go:64] FLAG: --registry-qps="5"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394203 4688 flags.go:64] FLAG: --reserved-cpus=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394214 4688 flags.go:64] FLAG: --reserved-memory=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394228 4688 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394241 4688 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394252 4688 flags.go:64] FLAG: --rotate-certificates="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394264 4688 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394276 4688 flags.go:64] FLAG: --runonce="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394288 4688 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394300 4688 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394314 4688 flags.go:64] FLAG: --seccomp-default="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394326 4688 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394338 4688 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394350 4688 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394363 4688 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394375 4688 flags.go:64] FLAG: --storage-driver-password="root"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394387 4688 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394399 4688 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394413 4688 flags.go:64] FLAG: --storage-driver-user="root"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394425 4688 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394438 4688 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394450 4688 flags.go:64] FLAG: --system-cgroups=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394463 4688 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394483 4688 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394494 4688 flags.go:64] FLAG: --tls-cert-file=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394506 4688 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394552 4688 flags.go:64] FLAG: --tls-min-version=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394566 4688 flags.go:64] FLAG: --tls-private-key-file=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394578 4688 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394589 4688 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394601 4688 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394614 4688 flags.go:64] FLAG: --v="2"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394631 4688 flags.go:64] FLAG: --version="false"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394647 4688 flags.go:64] FLAG: --vmodule=""
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394663 4688 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.394676 4688 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395879 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395903 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395917 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395929 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395943 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395953 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395967 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395979 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.395990 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396001 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396012 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396022 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396033 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396044 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396055 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396065 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396079 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
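The flags.go:64 records above are a complete dump of the kubelet's effective command-line values, one FLAG: --name="value" pair per record, taken before the --config file is merged. A sketch for pulling them out of a saved copy of this journal (the filename is hypothetical):

    # Sketch: recover the kubelet flag values from the flags.go:64 records above.
    # "kubelet-journal.log" is a hypothetical saved copy of this journal excerpt.
    import re

    log_text = open("kubelet-journal.log", encoding="utf-8").read()

    # Each record looks like: flags.go:64] FLAG: --cgroup-driver="cgroupfs"
    FLAG_RE = re.compile(r'flags\.go:64\] FLAG: --([A-Za-z0-9-]+)="(.*?)"')

    flags = dict(FLAG_RE.findall(log_text))

    print(flags.get("config"))           # /etc/kubernetes/kubelet.conf
    print(flags.get("node-ip"))          # 192.168.126.11
    print(flags.get("system-reserved"))  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi

Note that the dump reflects flag defaults and overrides only: --cgroup-driver still reads "cgroupfs" here, while the server.go:1437 record near the end of this excerpt shows the kubelet ultimately taking cgroupDriver="systemd" from the CRI runtime.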
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396093 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396105 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396118 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396129 4688 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396140 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396151 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396162 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396174 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396185 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396195 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396205 4688 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396216 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396228 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396239 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396250 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396260 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396270 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396293 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396306 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396317 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396328 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396343 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396356 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396369 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396380 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396392 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396403 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396413 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396423 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396433 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396443 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396453 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396464 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396477 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396491 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396503 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396513 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396564 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396575 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396586 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396596 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396606 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396617 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396627 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396637 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396649 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396659 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396669 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396680 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396690 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396700 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396710 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396720 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.396733 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.396770 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.410110 4688 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.410155 4688 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410233 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410241 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410246 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410250 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410254 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410257 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410261 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410264 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410268 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410272 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410276 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410279 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410283 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability
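After reconciling flags and the config file, feature_gate.go:386 prints the effective kubelet gate map (the identical map is printed twice more below as later components re-apply the same settings). The unrecognized names in the W records appear to be OpenShift cluster-level feature gates that this kubelet binary does not define, so it warns and ignores them; only Kubernetes-native gates survive into the map. A sketch that turns the Go-style map dump into a Python dict:

    # Sketch: parse the feature_gate.go:386 dump above into a dict of booleans.
    # `dump` is copied verbatim from the log record.
    import re

    dump = ('feature gates: {map[CloudDualStackNodeIPs:true '
            'DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false '
            'EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false '
            'ProcMountType:false RouteExternalCertificate:false '
            'ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false '
            'UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false '
            'ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}')

    gates = {name: value == "true"
             for name, value in re.findall(r'(\w+):(true|false)', dump)}

    enabled = sorted(k for k, v in gates.items() if v)
    print(enabled)  # ['CloudDualStackNodeIPs', 'DisableKubeletCloudCredentialProviders',
                    #  'KMSv1', 'ValidatingAdmissionPolicy']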
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410286 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410290 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410293 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410297 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410301 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410304 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410308 4688 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410312 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410317 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410322 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410327 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410334 4688 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410339 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410345 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410350 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410355 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410359 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410365 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410371 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410376 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410380 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410385 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410389 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410394 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410398 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410401 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410405 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410410 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410417 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410421 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410425 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410429 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410434 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410439 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410443 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410447 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410451 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410455 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410459 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410462 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410467 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410471 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410474 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410478 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410481 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410485 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410489 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410494 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410498 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410502 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410506 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410509 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410514 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410533 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410538 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410543 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410547 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410551 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.410558 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410676 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410684 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410690 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410695 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410701 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410705 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410710 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410714 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410717 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410722 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410726 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410730 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410733 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410736 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410741 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410744 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410748 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410751 4688 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410755 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410759 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410762 4688 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410766 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410771 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410775 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410779 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410783 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410786 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410791 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410795 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410799 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410804 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410808 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410812 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410817 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410823 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410828 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410833 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410838 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410842 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410846 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410851 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410855 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410859 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410864 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410868 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410872 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410879 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410885 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410889 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410894 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410899 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410903 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410908 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410913 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410918 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410923 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410928 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410933 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410937 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410942 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410947 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410951 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410957 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410964 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410969 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410975 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410981 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410987 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410992 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.410998 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.411005 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.411013 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.412249 4688 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.416821 4688 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.416896 4688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.418496 4688 server.go:997] "Starting client certificate rotation"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.418542 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.418756 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-09 11:19:55.803656787 +0000 UTC
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.418876 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 335h5m45.384783175s for next certificate rotation
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.447136 4688 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.452223 4688 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.467877 4688 log.go:25] "Validated CRI v1 runtime API"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.509844 4688 log.go:25] "Validated CRI v1 image API"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.515452 4688 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.524906 4688 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-12-09-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.524965 4688 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 25 12:14:10 crc
kubenswrapper[4688]: I1125 12:14:10.548627 4688 manager.go:217] Machine: {Timestamp:2025-11-25 12:14:10.545882019 +0000 UTC m=+0.655510907 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f59bf0f2-499e-4429-a2b3-013f2c9a413d BootID:ffd64935-041b-437f-a9ed-f5d9731454d0 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:17:14:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:17:14:82 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8e:de:6a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c8:0a:7e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:72:2e:55 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:bf:9b:8c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:38:90:ed:2f:88 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:52:35:44:39:1f:1b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: 
DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.549040 4688 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.549455 4688 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.549976 4688 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.550194 4688 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.550241 4688 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.550552 4688 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.550566 4688 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.556586 4688 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.556659 4688 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.557693 4688 state_mem.go:36] "Initialized new in-memory state store" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.557822 4688 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.580420 4688 kubelet.go:418] "Attempting to sync node with API server" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.580455 4688 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.580503 4688 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.580553 4688 kubelet.go:324] "Adding apiserver pod source" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.580574 4688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.601016 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.601152 4688 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.601089 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.601257 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.602617 4688 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.605205 4688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.613492 4688 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619645 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619695 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619711 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619724 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619747 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619761 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619774 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619795 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619813 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619826 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619897 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.619911 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.622227 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 
12:14:10.623054 4688 server.go:1280] "Started kubelet" Nov 25 12:14:10 crc systemd[1]: Started Kubernetes Kubelet. Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.632346 4688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.633803 4688 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.634816 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.635203 4688 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.651589 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.651677 4688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.651702 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:34:51.243207844 +0000 UTC Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.651746 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1060h20m40.591467165s for next certificate rotation Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.651931 4688 server.go:460] "Adding debug handlers to kubelet server" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.652682 4688 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.653128 4688 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.653196 4688 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.653366 4688 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.653896 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="200ms" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.651129 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.159:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b3ee9434b70c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 12:14:10.623000776 +0000 UTC m=+0.732629684,LastTimestamp:2025-11-25 12:14:10.623000776 +0000 UTC m=+0.732629684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 12:14:10 crc 
kubenswrapper[4688]: I1125 12:14:10.654772 4688 factory.go:153] Registering CRI-O factory Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.655471 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.655627 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.655730 4688 factory.go:221] Registration of the crio container factory successfully Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.656063 4688 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.656086 4688 factory.go:55] Registering systemd factory Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.656095 4688 factory.go:221] Registration of the systemd container factory successfully Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.656120 4688 factory.go:103] Registering Raw factory Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.656140 4688 manager.go:1196] Started watching for new ooms in manager Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.657565 4688 manager.go:319] Starting recovery of all containers Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.665610 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666183 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666208 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666227 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666245 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666264 4688 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666281 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666299 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666319 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666339 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666358 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666377 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666395 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666416 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666433 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666453 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666475 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666497 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666516 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666565 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666583 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666600 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666619 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666637 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666655 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666675 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666709 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666735 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666764 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666790 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666814 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666836 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666860 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666880 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666898 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666917 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666936 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666954 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666972 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.666990 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667010 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667033 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667052 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667073 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667092 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667112 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667131 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667149 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667168 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667186 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667205 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667224 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667250 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667270 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667289 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667309 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667327 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667347 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667365 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667383 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667402 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667419 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667438 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667456 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667478 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667498 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667516 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667568 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667585 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667603 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667620 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667639 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667656 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667674 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667740 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667764 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667788 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667815 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667839 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667920 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667952 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.667982 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668001 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668032 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668049 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668066 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668085 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668101 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668118 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668135 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668155 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668174 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668192 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668211 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668229 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668247 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668267 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668287 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668304 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668328 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668345 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668363 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668382 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668400 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668433 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668454 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668473 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668493 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668514 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668654 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668678 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668699 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668719 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668739 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668757 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668777 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668795 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668812 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668829 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668852 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668878 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668904 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668928 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668953 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668977 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.668996 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669013 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669031 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669051 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669070 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669087 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669112 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669130 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669147 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669166 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669185 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669205 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669224 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669242 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669259 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669276 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.669295 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671576 4688 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671619 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671644 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671663 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671685 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671704 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671807 4688 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671833 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.671927 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672062 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672150 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672184 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672252 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672275 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672293 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672400 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672426 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672478 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672496 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672598 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672619 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672711 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672739 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672793 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672824 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672877 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672900 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672919 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672970 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.672993 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.673012 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.673064 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.673083 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.673103 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.673159 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.673198 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.673969 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.673996 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674051 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674073 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674091 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674144 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674164 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674182 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674239 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674271 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674346 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674377 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.674441 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.675902 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676724 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676754 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676771 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676789 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676803 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676819 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676836 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676849 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676864 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676877 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676888 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676907 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676917 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676930 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676944 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676958 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676968 4688 reconstruct.go:97] "Volume reconstruction finished" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.676976 4688 reconciler.go:26] "Reconciler: start to sync state" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.684449 4688 manager.go:324] Recovery completed Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.693327 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.694821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.694868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.694882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.696129 4688 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.696159 4688 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.696187 4688 state_mem.go:36] "Initialized new in-memory state store" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.736529 4688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.738541 4688 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.738589 4688 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.738613 4688 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.738669 4688 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 12:14:10 crc kubenswrapper[4688]: W1125 12:14:10.739738 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.739929 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.753590 4688 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.754428 4688 policy_none.go:49] "None policy: Start" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.755744 4688 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.755829 4688 state_mem.go:35] "Initializing new in-memory state store" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.838804 4688 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.850956 4688 manager.go:334] "Starting Device Plugin manager" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.851055 4688 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.851084 4688 server.go:79] "Starting device plugin registration server" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.851859 4688 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.851934 4688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.852292 4688 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.852567 4688 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.852583 4688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.855334 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="400ms" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.865682 4688 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.952088 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.954012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.954077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.954096 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:10 crc kubenswrapper[4688]: I1125 12:14:10.954134 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 12:14:10 crc kubenswrapper[4688]: E1125 12:14:10.954759 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.039600 4688 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.039724 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.041209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.041254 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.041267 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.041413 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.041837 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.041920 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.042302 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.042335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.042347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.043258 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.043328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.043348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.043448 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.043708 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.043801 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.044681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.044714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.044725 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.044833 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045061 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045121 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045717 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045730 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045830 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045932 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045832 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.045969 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.046331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.046349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.046359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.047630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.047645 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.047683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.047725 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.047743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.047755 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.048012 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.048046 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.049045 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.049074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.049088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081431 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081481 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081508 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081557 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081581 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081603 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081620 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 
12:14:11.081638 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081786 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081943 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081974 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.081997 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.082072 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.082142 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.082168 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.155683 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.157945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.158360 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.158425 
4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.158478 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 12:14:11 crc kubenswrapper[4688]: E1125 12:14:11.159157 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183101 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183182 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183222 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183290 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183341 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183383 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183416 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183410 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183451 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183449 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183504 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183519 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183499 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183570 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183428 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183371 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183609 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183605 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183676 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183697 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183725 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183750 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183781 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183785 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183802 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183804 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183863 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.183918 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: E1125 12:14:11.257186 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="800ms" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.383027 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.410219 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.430950 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.441450 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.447663 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:11 crc kubenswrapper[4688]: W1125 12:14:11.530341 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:11 crc kubenswrapper[4688]: E1125 12:14:11.530564 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.559655 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.561388 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.561453 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.561464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.561493 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 12:14:11 crc kubenswrapper[4688]: E1125 12:14:11.562310 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc" Nov 25 12:14:11 crc kubenswrapper[4688]: W1125 12:14:11.618490 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:11 crc kubenswrapper[4688]: E1125 12:14:11.618666 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:11 crc kubenswrapper[4688]: I1125 12:14:11.635648 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:11 crc kubenswrapper[4688]: W1125 12:14:11.773574 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:11 crc kubenswrapper[4688]: E1125 12:14:11.773697 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:11 crc kubenswrapper[4688]: W1125 12:14:11.860700 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7163ff766e6d84e70ef8162f1f5a6f217410a1393e21a2c4302ef71b9310274b WatchSource:0}: Error finding container 7163ff766e6d84e70ef8162f1f5a6f217410a1393e21a2c4302ef71b9310274b: Status 404 returned error can't find the container with id 7163ff766e6d84e70ef8162f1f5a6f217410a1393e21a2c4302ef71b9310274b Nov 25 12:14:11 crc kubenswrapper[4688]: W1125 12:14:11.869583 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-1b82225d75c36cef21450663e752e8bfa86f24e4c3058661d7e6bae84d04f146 WatchSource:0}: Error finding container 1b82225d75c36cef21450663e752e8bfa86f24e4c3058661d7e6bae84d04f146: Status 404 returned error can't find the container with id 1b82225d75c36cef21450663e752e8bfa86f24e4c3058661d7e6bae84d04f146 Nov 25 12:14:11 crc kubenswrapper[4688]: W1125 12:14:11.873027 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e8db99024f3b7ec6db16f5f9a885f6f30ee4c44251d72fba3a6325d706715df9 WatchSource:0}: Error finding container e8db99024f3b7ec6db16f5f9a885f6f30ee4c44251d72fba3a6325d706715df9: Status 404 returned error can't find the container with id e8db99024f3b7ec6db16f5f9a885f6f30ee4c44251d72fba3a6325d706715df9 Nov 25 12:14:11 crc kubenswrapper[4688]: W1125 12:14:11.892240 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b47c34ced7dda0584ab8666f50983a3650475dea76f28ae3dc998f93bbd48036 WatchSource:0}: Error finding container b47c34ced7dda0584ab8666f50983a3650475dea76f28ae3dc998f93bbd48036: Status 404 returned error can't find the container with id b47c34ced7dda0584ab8666f50983a3650475dea76f28ae3dc998f93bbd48036 Nov 25 12:14:11 crc kubenswrapper[4688]: W1125 12:14:11.911854 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-28cc6c23dd1d28a2fd729847fad2e450e319bbb8aac50dd43eabb835d81f6242 WatchSource:0}: Error finding container 28cc6c23dd1d28a2fd729847fad2e450e319bbb8aac50dd43eabb835d81f6242: Status 404 returned error can't find the container with id 28cc6c23dd1d28a2fd729847fad2e450e319bbb8aac50dd43eabb835d81f6242 Nov 25 12:14:12 crc kubenswrapper[4688]: E1125 12:14:12.059229 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="1.6s" Nov 25 12:14:12 crc kubenswrapper[4688]: W1125 12:14:12.188346 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:12 crc 
kubenswrapper[4688]: E1125 12:14:12.188562 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.363319 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.365183 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.365257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.365275 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.365319 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 12:14:12 crc kubenswrapper[4688]: E1125 12:14:12.366023 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc" Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.636317 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.747501 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b47c34ced7dda0584ab8666f50983a3650475dea76f28ae3dc998f93bbd48036"} Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.748927 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e8db99024f3b7ec6db16f5f9a885f6f30ee4c44251d72fba3a6325d706715df9"} Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.750640 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"28cc6c23dd1d28a2fd729847fad2e450e319bbb8aac50dd43eabb835d81f6242"} Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.752311 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1b82225d75c36cef21450663e752e8bfa86f24e4c3058661d7e6bae84d04f146"} Nov 25 12:14:12 crc kubenswrapper[4688]: I1125 12:14:12.753667 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7163ff766e6d84e70ef8162f1f5a6f217410a1393e21a2c4302ef71b9310274b"} Nov 25 12:14:13 crc kubenswrapper[4688]: W1125 12:14:13.149223 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:13 crc kubenswrapper[4688]: E1125 12:14:13.149301 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:13 crc kubenswrapper[4688]: W1125 12:14:13.362584 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:13 crc kubenswrapper[4688]: E1125 12:14:13.362649 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.635500 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:13 crc kubenswrapper[4688]: E1125 12:14:13.660208 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="3.2s" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.759261 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173" exitCode=0 Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.759409 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173"} Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.759680 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.761372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958"} Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.761870 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.761920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.761942 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 
12:14:13.768902 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.769069 4688 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b" exitCode=0 Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.769169 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b"} Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.769209 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.770066 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.770098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.770110 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.770151 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.770168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.770177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.772321 4688 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2" exitCode=0 Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.772383 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2"} Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.772418 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.773696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.773721 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.773731 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.774991 4688 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a2e8395680eb84a8cfb1d38a48d794b8e8f6b26869f0a026cb4d9c5a1cb2cf69" exitCode=0 Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.775022 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a2e8395680eb84a8cfb1d38a48d794b8e8f6b26869f0a026cb4d9c5a1cb2cf69"} Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.775121 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.776237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.776279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.776297 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.966164 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.967815 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.967848 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.967857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:13 crc kubenswrapper[4688]: I1125 12:14:13.967881 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 12:14:13 crc kubenswrapper[4688]: E1125 12:14:13.968421 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc" Nov 25 12:14:14 crc kubenswrapper[4688]: W1125 12:14:14.037142 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:14 crc kubenswrapper[4688]: E1125 12:14:14.037288 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:14 crc kubenswrapper[4688]: W1125 12:14:14.368674 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:14 crc kubenswrapper[4688]: E1125 12:14:14.368781 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.636269 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.781055 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235"} Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.781117 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c"} Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.784415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025"} Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.784439 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5"} Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.786314 4688 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7" exitCode=0 Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.786376 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7"} Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.786503 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.787766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.787795 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.787807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.790280 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7eb78509ebf87118b254c047fbe3d4c62ba269a7a3eadcf330a2015c709ebb2b"} Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.790331 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.791277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.791316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:14 crc 
kubenswrapper[4688]: I1125 12:14:14.791333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:14 crc kubenswrapper[4688]: I1125 12:14:14.793984 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba"} Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.636191 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:15 crc kubenswrapper[4688]: E1125 12:14:15.771267 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.159:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b3ee9434b70c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 12:14:10.623000776 +0000 UTC m=+0.732629684,LastTimestamp:2025-11-25 12:14:10.623000776 +0000 UTC m=+0.732629684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.800687 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08"} Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.800790 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec"} Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.803877 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d"} Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.803952 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.805316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.805368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.805383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.807908 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d"} Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.807965 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.808970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.809023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.809035 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.810701 4688 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229" exitCode=0 Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.810773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229"} Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.810785 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.810904 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.812025 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.812073 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.812089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.812362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.812413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:15 crc kubenswrapper[4688]: I1125 12:14:15.812432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.636397 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.815044 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e1038eaa2eb60918e3bbb2a09df7af4103e7b0d3d752b3771b9a8ea69b34b7f2"} Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.815873 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda"} Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.815400 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.817486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.817608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.817684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.821418 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.821510 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.821924 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a"} Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.821998 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173"} Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.822093 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.823118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.823191 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.823242 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.823702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.823771 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:16 crc kubenswrapper[4688]: I1125 12:14:16.823822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:16 crc kubenswrapper[4688]: E1125 12:14:16.861192 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="6.4s" Nov 25 12:14:17 crc kubenswrapper[4688]: W1125 12:14:17.008681 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: 
connect: connection refused Nov 25 12:14:17 crc kubenswrapper[4688]: E1125 12:14:17.009008 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:17 crc kubenswrapper[4688]: W1125 12:14:17.010120 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:17 crc kubenswrapper[4688]: E1125 12:14:17.010217 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.168620 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.170016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.170060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.170071 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.170093 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 12:14:17 crc kubenswrapper[4688]: E1125 12:14:17.170614 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.159:6443: connect: connection refused" node="crc" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.500817 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:17 crc kubenswrapper[4688]: W1125 12:14:17.585690 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:17 crc kubenswrapper[4688]: E1125 12:14:17.585804 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.159:6443: connect: connection refused" logger="UnhandledError" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.635921 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.667021 4688 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.667301 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.667402 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.782656 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.830142 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.830789 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.831148 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14"} Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.831186 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5"} Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.831198 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727"} Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.831213 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.831276 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.832172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.832205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.832215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.832602 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.832733 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.832746 4688 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.833095 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.833137 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:17 crc kubenswrapper[4688]: I1125 12:14:17.833148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.430336 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.636240 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.159:6443: connect: connection refused Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.837401 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.841187 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e1038eaa2eb60918e3bbb2a09df7af4103e7b0d3d752b3771b9a8ea69b34b7f2" exitCode=255 Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.841296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e1038eaa2eb60918e3bbb2a09df7af4103e7b0d3d752b3771b9a8ea69b34b7f2"} Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.841360 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.841360 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.841634 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.842904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.842988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.843016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.843738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.843802 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.843820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.843871 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.843941 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.843965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:18 crc kubenswrapper[4688]: I1125 12:14:18.845275 4688 scope.go:117] "RemoveContainer" containerID="e1038eaa2eb60918e3bbb2a09df7af4103e7b0d3d752b3771b9a8ea69b34b7f2" Nov 25 12:14:19 crc kubenswrapper[4688]: I1125 12:14:19.847902 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 12:14:19 crc kubenswrapper[4688]: I1125 12:14:19.850861 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67"} Nov 25 12:14:19 crc kubenswrapper[4688]: I1125 12:14:19.851233 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:19 crc kubenswrapper[4688]: I1125 12:14:19.852816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:19 crc kubenswrapper[4688]: I1125 12:14:19.852872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:19 crc kubenswrapper[4688]: I1125 12:14:19.852894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.602009 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.602187 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.603456 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.603498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.603509 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.854039 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.854176 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.855427 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.855484 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.855497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 
12:14:20 crc kubenswrapper[4688]: E1125 12:14:20.866126 4688 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.923444 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.923657 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.925150 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.925187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:20 crc kubenswrapper[4688]: I1125 12:14:20.925218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:21 crc kubenswrapper[4688]: I1125 12:14:21.856332 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:21 crc kubenswrapper[4688]: I1125 12:14:21.857286 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:21 crc kubenswrapper[4688]: I1125 12:14:21.857315 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:21 crc kubenswrapper[4688]: I1125 12:14:21.857324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.050675 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.050932 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.052388 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.052432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.052446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.060707 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.571278 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.572853 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.572892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.572903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.572932 4688 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.861454 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.863125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.863179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.863197 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:23 crc kubenswrapper[4688]: I1125 12:14:23.867115 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.624568 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.705396 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.705659 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.707187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.707230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.707240 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.864034 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.865152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.865208 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:24 crc kubenswrapper[4688]: I1125 12:14:24.865222 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:25 crc kubenswrapper[4688]: I1125 12:14:25.866335 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:25 crc kubenswrapper[4688]: I1125 12:14:25.867393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:25 crc kubenswrapper[4688]: I1125 12:14:25.867430 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:25 crc kubenswrapper[4688]: I1125 12:14:25.867442 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:26 crc kubenswrapper[4688]: I1125 12:14:26.737085 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Nov 25 12:14:26 crc kubenswrapper[4688]: I1125 12:14:26.737180 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 25 12:14:27 crc kubenswrapper[4688]: I1125 12:14:27.625179 4688 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 25 12:14:27 crc kubenswrapper[4688]: I1125 12:14:27.625313 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 25 12:14:27 crc kubenswrapper[4688]: I1125 12:14:27.675574 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]log ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]etcd ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/openshift.io-api-request-count-filter ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/openshift.io-startkubeinformers ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/priority-and-fairness-config-consumer ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/priority-and-fairness-filter ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-apiextensions-informers ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-apiextensions-controllers ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/crd-informer-synced ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-system-namespaces-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-cluster-authentication-info-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-legacy-token-tracking-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-service-ip-repair-controllers ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Nov 25 12:14:27 crc kubenswrapper[4688]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/priority-and-fairness-config-producer ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/bootstrap-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/start-kube-aggregator-informers ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/apiservice-status-local-available-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/apiservice-status-remote-available-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/apiservice-registration-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/apiservice-wait-for-first-sync ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/apiservice-discovery-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/kube-apiserver-autoregistration ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]autoregister-completion ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/apiservice-openapi-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: [+]poststarthook/apiservice-openapiv3-controller ok
Nov 25 12:14:27 crc kubenswrapper[4688]: livez check failed
Nov 25 12:14:27 crc kubenswrapper[4688]: I1125 12:14:27.675643 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 12:14:30 crc kubenswrapper[4688]: E1125 12:14:30.867066 4688 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 25 12:14:31 crc kubenswrapper[4688]: I1125 12:14:31.730467 4688 trace.go:236] Trace[195518961]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 12:14:19.854) (total time: 11876ms):
Nov 25 12:14:31 crc kubenswrapper[4688]: Trace[195518961]: ---"Objects listed" error: 11876ms (12:14:31.730)
Nov 25 12:14:31 crc kubenswrapper[4688]: Trace[195518961]: [11.876225514s] [11.876225514s] END
Nov 25 12:14:31 crc kubenswrapper[4688]: I1125 12:14:31.730545 4688 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 25 12:14:31 crc kubenswrapper[4688]: E1125 12:14:31.734006 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Nov 25 12:14:31 crc kubenswrapper[4688]: I1125 12:14:31.735000 4688 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 25 12:14:31 crc kubenswrapper[4688]: I1125 12:14:31.735029 4688 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 25
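The two probe bodies above are the kube-apiserver's verbose /livez format, one [+] or [-] line per registered check, and they explain each other: the 403 was returned because the anonymous probe user could not be authorized while the system:public-info-viewer clusterroles were still missing, and the 500 body shows the responsible poststarthook, rbac/bootstrap-roles, still marked [-] alongside scheduling/bootstrap-system-priority-classes. A minimal sketch of extracting the failing checks from such a body (the sample is abridged from the output above; only the [+]/[-] line format is assumed):

```python
import re

# Abridged from the verbose /livez body logged above; only the
# "[+]name ok" / "[-]name failed: ..." line format matters here.
BODY = """\
[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
livez check failed
"""

CHECK_RE = re.compile(r"^\[([+-])\](\S+) (ok|failed.*)$")

def failed_checks(body: str) -> list[str]:
    """Names of all checks marked '[-]' in a verbose healthz/livez body."""
    return [
        m.group(2)
        for line in body.splitlines()
        if (m := CHECK_RE.match(line)) and m.group(1) == "-"
    ]

if __name__ == "__main__":
    # Prints: ['poststarthook/rbac/bootstrap-roles',
    #          'poststarthook/scheduling/bootstrap-system-priority-classes']
    print(failed_checks(BODY))
```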
12:14:31 crc kubenswrapper[4688]: I1125 12:14:31.737250 4688 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.558834 4688 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.594003 4688 apiserver.go:52] "Watching apiserver" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.597757 4688 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.598256 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.598796 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.599539 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.599602 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.599677 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.599748 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.599692 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.599785 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.599903 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.600020 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.602941 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.602948 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.602949 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.603333 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.603347 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.603388 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.604062 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.604113 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.604170 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.639462 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.654228 4688 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.661438 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.677242 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.678196 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.678301 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.680739 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.683462 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.694309 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.695135 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.712026 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.729127 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.742508 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.742654 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.742706 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.742751 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.742846 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743078 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743146 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743193 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743242 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743301 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743197 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.743375 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:14:33.243314185 +0000 UTC m=+23.352943053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743842 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743905 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743954 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744003 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744056 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744110 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744158 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744257 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744307 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744356 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744405 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744465 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744512 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744598 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743996 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744393 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744649 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744705 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744757 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744804 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744862 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744914 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744969 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745018 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745067 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745118 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745165 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745225 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745285 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745342 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743763 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744155 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.743536 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745404 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744150 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745426 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745483 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745569 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745624 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745673 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745721 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745773 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745841 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745890 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745988 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746037 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746104 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746158 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746274 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746363 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746424 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746474 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746800 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746870 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746940 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746997 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747064 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747134 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747181 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747248 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747300 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747352 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747416 4688 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747475 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747566 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747636 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747705 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747752 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747906 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748119 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748219 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748287 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748363 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748434 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748498 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748588 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748670 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748737 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748790 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748841 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748941 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748997 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749054 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749106 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749166 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749229 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749293 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749346 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749405 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744594 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.744699 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745066 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745246 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745183 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745378 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745568 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745594 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745691 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.745915 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746035 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746071 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746143 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746267 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746288 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746472 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746463 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746684 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746792 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.746870 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747143 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747165 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747206 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747297 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.747544 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748025 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748252 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748350 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748629 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748737 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.748959 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749027 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749089 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749218 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749268 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749386 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749464 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749692 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749718 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749689 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749688 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749771 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.750292 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.750559 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.750771 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.750787 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.751071 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.751213 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.751268 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.751364 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.751575 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.751599 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.751715 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.752055 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.752550 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.752830 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.753002 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.753141 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.753678 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754187 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754689 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754749 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.749468 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754830 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754861 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754893 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754887 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754923 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754948 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754972 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754994 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.754977 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755095 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755255 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755275 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755326 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755355 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755382 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755409 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755437 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755465 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755488 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755513 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755559 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755585 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755637 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755674 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755687 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755736 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755794 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755890 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.755964 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756029 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756097 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756158 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756192 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756258 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756316 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756380 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756438 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756498 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756441 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756592 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756624 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756650 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756704 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756763 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756823 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756878 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756932 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.756991 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757045 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757096 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757145 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757202 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757247 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757278 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757301 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757348 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757358 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757447 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757776 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757837 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757899 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757964 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758025 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758093 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758150 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758204 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758259 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758316 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758372 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758420 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758463 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758497 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758585 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758635 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758671 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758707 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758749 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758789 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758837 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758887 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758936 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758981 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759037 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759088 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759138 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759201 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759250 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759293 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759340 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759385 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759429 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759485 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759589 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759752 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759800 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759833 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759869 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759906 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759942 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759974 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760008 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760180 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760227 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760303 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760338 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760417 4688 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760452 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760488 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760566 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760611 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760646 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760681 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760745 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760824 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761131 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761189 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761244 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761286 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761330 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761366 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761407 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761458 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 
25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761602 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761660 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761711 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762069 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762106 4688 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762129 4688 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762151 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762173 4688 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762194 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762213 4688 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762233 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762254 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762274 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762297 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762319 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762339 4688 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762358 4688 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762421 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762443 4688 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762463 4688 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762483 4688 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762505 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762562 4688 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762595 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762615 4688 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762635 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762654 4688 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762673 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762692 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762712 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762731 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762751 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762772 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762795 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762814 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762833 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762854 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762874 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" 
(UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762893 4688 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762913 4688 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762933 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762953 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762972 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762995 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763014 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763033 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763053 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763074 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763092 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763115 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763136 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763154 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763175 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763194 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763214 4688 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763233 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763252 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763270 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763289 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763309 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763327 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763347 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763367 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763387 4688 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node 
\"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763412 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763440 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763464 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763490 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763517 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763576 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763603 4688 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763631 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763657 4688 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763682 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763708 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763734 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763758 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 
12:14:32.764704 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.766313 4688 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757903 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.757969 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758629 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758642 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759013 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.758848 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759147 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759451 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.759649 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760140 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760157 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760232 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760318 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760466 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760612 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.760972 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761031 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.770651 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.771365 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.771635 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761197 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761441 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761479 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.761662 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762063 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762242 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762249 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762356 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762407 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762608 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.762875 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763042 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763595 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.763683 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.764368 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.764333 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.764442 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.764547 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.773515 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.773937 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.773959 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.765077 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.765261 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.765435 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.765856 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.766007 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.766102 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.766322 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.766407 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.766396 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.767195 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.767386 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.767418 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.768081 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.768215 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.768327 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.768520 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.769095 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.769420 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.769930 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.769949 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.769999 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.770215 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.774279 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:33.274250881 +0000 UTC m=+23.383879749 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.764644 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.770443 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.774794 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.775399 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.775647 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.775885 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.776009 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.776138 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.776244 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.776465 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.777272 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.777363 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.777434 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.777463 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.777804 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.778135 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.776932 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.778266 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.778733 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.779282 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.779502 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.779713 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.779756 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.779839 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.780208 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.779943 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.780785 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.781119 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.781334 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.781429 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.781910 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.782465 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.782613 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.782859 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.782879 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.782970 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.783120 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.783321 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.783371 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.783436 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.783922 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.784045 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.784092 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.784133 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.784240 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.784258 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:33.284227024 +0000 UTC m=+23.393855912 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.784697 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.785041 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.785050 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.785992 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.786067 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.786087 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.786110 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.786124 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.786562 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.787167 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.787607 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.789678 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.790048 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.790292 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.792930 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.794828 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.797187 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.797222 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.797239 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.797254 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.797299 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.797320 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.797331 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:33.297308052 +0000 UTC m=+23.406936930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.797405 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:33.297379164 +0000 UTC m=+23.407008042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.802750 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.808106 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.809081 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.802750 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.808106 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.809081 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.814908 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.817730 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.822796 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.825777 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.829884 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.833676 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.845360 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1038eaa2eb60918e3bbb2a09df7af4103e7b0d3d752b3771b9a8ea69b34b7f2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:18Z\\\",\\\"message\\\":\\\"W1125 12:14:17.483917 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 12:14:17.484438 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764072857 cert, and key in /tmp/serving-cert-1031623111/serving-signer.crt, /tmp/serving-cert-1031623111/serving-signer.key\\\\nI1125 12:14:17.900258 1 observer_polling.go:159] Starting file observer\\\\nW1125 12:14:17.903632 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 12:14:17.903847 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:17.904829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031623111/tls.crt::/tmp/serving-cert-1031623111/tls.key\\\\\\\"\\\\nF1125 12:14:18.210015 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.863187 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.863187 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.878141 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.886667 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.886777 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.886964 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887030 4688 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887105 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887028 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887123 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887135 4688 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887148 4688 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887181 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887192 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887204 4688 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887218 4688 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887218 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887229 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887263 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887276 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887287 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887299 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887310 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887360 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887372 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887455 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887467 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887478 4688 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887490 4688 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887504 4688 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887755 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887889 4688 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887915 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887967 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.888007 4688 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.887887 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.888147 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.888285 4688 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.888352 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.888433 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.888509 4688 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.888646 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889031 4688 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889099 4688 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889173 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889259 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889334 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889416 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889493 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889580 4688 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 
12:14:32.889647 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889751 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889824 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889893 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889957 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890015 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67"} Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890021 4688 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890187 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890208 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890225 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890263 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890279 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890296 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
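
Buried in the volume cleanup above, the PLEG (pod lifecycle event generator) reports ContainerDied for container 4d203ecf... of kube-apiserver-crc with exit code 255, and the scope.go:117 RemoveContainer entries show the kubelet garbage-collecting the even older instance e1038eaa... before restarting. A small sketch for correlating these PLEG events, again assuming a one-entry-per-line journal dump named kubelet.log:

    # Small sketch: extract (podID, containerID, exitCode) from PLEG
    # "container finished" entries like the generic.go:334 line nearby.
    import re
    import sys

    pat = re.compile(
        r'container finished" podID="([0-9a-f-]+)" '
        r'containerID="([0-9a-f]+)" exitCode=(\d+)'
    )

    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for pod, cid, rc in pat.findall(line):
                print(f"pod {pod}  container {cid[:12]}  exit {rc}")
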
Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890311 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890324 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890339 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890351 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890362 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.889992 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67" exitCode=255 Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890112 4688 scope.go:117] "RemoveContainer" containerID="e1038eaa2eb60918e3bbb2a09df7af4103e7b0d3d752b3771b9a8ea69b34b7f2" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890493 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890517 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890549 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890562 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890574 4688 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890587 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890599 4688 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890612 4688 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890625 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890639 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890651 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890665 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890677 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890690 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890702 4688 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890715 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890738 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890749 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890761 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890776 4688 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890788 4688 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890801 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890814 4688 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890825 4688 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890837 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890849 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890861 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890874 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890885 4688 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890897 4688 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890761 4688 scope.go:117] "RemoveContainer" containerID="4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890907 4688 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890967 4688 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on 
node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890982 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.890995 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891007 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891020 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891033 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891045 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891057 4688 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891069 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891081 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891147 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891283 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891313 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891327 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891340 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: E1125 12:14:32.891342 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891352 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891423 4688 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891441 4688 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891456 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891514 4688 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891573 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891587 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.891601 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.903843 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.912929 4688 util.go:30] "No sandbox for pod can be found. 
Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.912929 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.916214 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.921385 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.928734 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 12:14:32 crc kubenswrapper[4688]: W1125 12:14:32.929221 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-897bad0dd881c52712088f74a1d5201ac7af2e3b601b23345817cbbd7be99ce0 WatchSource:0}: Error finding container 897bad0dd881c52712088f74a1d5201ac7af2e3b601b23345817cbbd7be99ce0: Status 404 returned error can't find the container with id 897bad0dd881c52712088f74a1d5201ac7af2e3b601b23345817cbbd7be99ce0 Nov 25 12:14:32 crc kubenswrapper[4688]: W1125 12:14:32.932100 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b44fb5b1aaa5e55698d376c8d33c14c828edb4136fda3ad0e9994eaac54ae64f WatchSource:0}: Error finding container b44fb5b1aaa5e55698d376c8d33c14c828edb4136fda3ad0e9994eaac54ae64f: Status 404 returned error can't find the container with id b44fb5b1aaa5e55698d376c8d33c14c828edb4136fda3ad0e9994eaac54ae64f Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.932188 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.947912 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.961442 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1038eaa2eb60918e3bbb2a09df7af4103e7b0d3d752b3771b9a8ea69b34b7f2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:18Z\\\",\\\"message\\\":\\\"W1125 12:14:17.483917 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
12:14:17.484438 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764072857 cert, and key in /tmp/serving-cert-1031623111/serving-signer.crt, /tmp/serving-cert-1031623111/serving-signer.key\\\\nI1125 12:14:17.900258 1 observer_polling.go:159] Starting file observer\\\\nW1125 12:14:17.903632 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 12:14:17.903847 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:17.904829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031623111/tls.crt::/tmp/serving-cert-1031623111/tls.key\\\\\\\"\\\\nF1125 12:14:18.210015 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.976056 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:32 crc kubenswrapper[4688]: I1125 12:14:32.988099 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.295567 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.295656 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.295693 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.295716 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:14:34.295690326 +0000 UTC m=+24.405319184 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.295806 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.295808 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.295843 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:34.295837191 +0000 UTC m=+24.405466059 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.295855 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:34.295850352 +0000 UTC m=+24.405479220 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.396554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.396647 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.396817 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.396844 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.396864 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.396960 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:34.396931905 +0000 UTC m=+24.506560813 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.396954 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.397060 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.397146 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.397304 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:34.397245683 +0000 UTC m=+24.506874581 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.895145 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a1bde4e776ce13c0cd41fb3bb7414ed38a43a576a155f26a14459a2505ed4843"} Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.897516 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066"} Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.897610 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8"} Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.897638 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b44fb5b1aaa5e55698d376c8d33c14c828edb4136fda3ad0e9994eaac54ae64f"} Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.899896 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf"} Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.899951 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"897bad0dd881c52712088f74a1d5201ac7af2e3b601b23345817cbbd7be99ce0"} Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.903066 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.907033 4688 scope.go:117] "RemoveContainer" containerID="4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67" Nov 25 12:14:33 crc kubenswrapper[4688]: E1125 12:14:33.907495 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.926674 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:33Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.954695 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:33Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.971842 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:33Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:33 crc kubenswrapper[4688]: I1125 12:14:33.988932 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:33Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.009366 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.029690 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.053879 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1038eaa2eb60918e3bbb2a09df7af4103e7b0d3d752b3771b9a8ea69b34b7f2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:18Z\\\",\\\"message\\\":\\\"W1125 12:14:17.483917 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
12:14:17.484438 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764072857 cert, and key in /tmp/serving-cert-1031623111/serving-signer.crt, /tmp/serving-cert-1031623111/serving-signer.key\\\\nI1125 12:14:17.900258 1 observer_polling.go:159] Starting file observer\\\\nW1125 12:14:17.903632 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 12:14:17.903847 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:17.904829 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1031623111/tls.crt::/tmp/serving-cert-1031623111/tls.key\\\\\\\"\\\\nF1125 12:14:18.210015 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.077506 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.099944 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.117041 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.133658 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.153819 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.173732 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.188125 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.303975 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.304183 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:14:36.304152537 +0000 UTC m=+26.413781405 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.304402 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.304608 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.304558 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.304752 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered 
Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.305020 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:36.304906757 +0000 UTC m=+26.414535625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.305168 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:36.305155704 +0000 UTC m=+26.414784572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.405149 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.405239 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.405340 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.405367 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.405340 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.405382 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.405389 4688 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.405399 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.405439 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:36.405422776 +0000 UTC m=+26.515051664 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.405457 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:36.405448446 +0000 UTC m=+26.515077334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.631693 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.636844 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.642514 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.644847 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.657809 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.672273 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.686297 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.701569 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.721745 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.736021 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.738322 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.738907 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.738961 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.738973 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.739056 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.739166 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:34 crc kubenswrapper[4688]: E1125 12:14:34.739256 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.742471 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.742989 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.744231 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.744876 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.746454 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.747452 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.748587 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.749930 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.750795 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.752320 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.753189 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.755113 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.756057 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.756309 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.756974 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.757573 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.758083 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.758857 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.759274 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.759983 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.760648 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.761133 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.761688 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.762106 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.762757 4688 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.763240 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.765456 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.766370 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.766957 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.767669 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.768264 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.770409 4688 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.770561 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.772586 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.773111 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.774150 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.774843 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.775737 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.776391 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.777456 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.778258 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.779544 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.780106 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.781251 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.782049 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.783202 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.783877 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.784958 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.785544 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.786687 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.787143 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.787969 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.788417 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.789291 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.789862 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.790311 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.791175 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.791209 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.792754 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.806442 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.834755 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.869553 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.881615 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.894703 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.908576 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.922042 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.932670 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.949388 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.960617 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.971191 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.985034 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:34 crc kubenswrapper[4688]: I1125 12:14:34.997727 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:34Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:35 crc kubenswrapper[4688]: I1125 12:14:35.009473 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:35Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:35 crc kubenswrapper[4688]: I1125 12:14:35.912612 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8"} Nov 25 12:14:35 crc kubenswrapper[4688]: I1125 12:14:35.945700 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:35Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:35 crc kubenswrapper[4688]: I1125 12:14:35.965412 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:35Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:35 crc kubenswrapper[4688]: I1125 12:14:35.979162 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:35Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:35 crc kubenswrapper[4688]: I1125 12:14:35.991854 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:35Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.004326 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.015178 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.029788 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.050308 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.065495 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.319418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.319500 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.319554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.319685 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:14:40.319648908 +0000 UTC m=+30.429277786 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.319703 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.319802 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:40.319782632 +0000 UTC m=+30.429411500 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.320149 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.320198 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:40.320189424 +0000 UTC m=+30.429818362 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.420849 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.420908 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.421047 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.421066 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.421079 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.421131 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:40.421115713 +0000 UTC m=+30.530744581 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.421047 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.421167 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.421180 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.421217 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:40.421203285 +0000 UTC m=+30.530832163 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.664660 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-cvktt"] Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.664961 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-cvktt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.667055 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.667345 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.667904 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.692004 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.715956 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.722704 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dceabd45-6147-4016-b8ba-5d3dac35df54-hosts-file\") pod \"node-resolver-cvktt\" (UID: \"dceabd45-6147-4016-b8ba-5d3dac35df54\") " pod="openshift-dns/node-resolver-cvktt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.722781 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flhjm\" (UniqueName: \"kubernetes.io/projected/dceabd45-6147-4016-b8ba-5d3dac35df54-kube-api-access-flhjm\") pod \"node-resolver-cvktt\" (UID: \"dceabd45-6147-4016-b8ba-5d3dac35df54\") " pod="openshift-dns/node-resolver-cvktt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.733668 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.739733 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.739778 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.739843 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.739986 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.740106 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:36 crc kubenswrapper[4688]: E1125 12:14:36.740235 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.746178 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.761366 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.774190 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.789097 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.800482 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.814019 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.823779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flhjm\" (UniqueName: 
\"kubernetes.io/projected/dceabd45-6147-4016-b8ba-5d3dac35df54-kube-api-access-flhjm\") pod \"node-resolver-cvktt\" (UID: \"dceabd45-6147-4016-b8ba-5d3dac35df54\") " pod="openshift-dns/node-resolver-cvktt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.823828 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dceabd45-6147-4016-b8ba-5d3dac35df54-hosts-file\") pod \"node-resolver-cvktt\" (UID: \"dceabd45-6147-4016-b8ba-5d3dac35df54\") " pod="openshift-dns/node-resolver-cvktt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.823942 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dceabd45-6147-4016-b8ba-5d3dac35df54-hosts-file\") pod \"node-resolver-cvktt\" (UID: \"dceabd45-6147-4016-b8ba-5d3dac35df54\") " pod="openshift-dns/node-resolver-cvktt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.835409 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finish
edAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:36Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.844737 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flhjm\" (UniqueName: \"kubernetes.io/projected/dceabd45-6147-4016-b8ba-5d3dac35df54-kube-api-access-flhjm\") pod \"node-resolver-cvktt\" (UID: \"dceabd45-6147-4016-b8ba-5d3dac35df54\") " pod="openshift-dns/node-resolver-cvktt" Nov 25 12:14:36 crc kubenswrapper[4688]: I1125 12:14:36.976739 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-cvktt" Nov 25 12:14:36 crc kubenswrapper[4688]: W1125 12:14:36.994781 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddceabd45_6147_4016_b8ba_5d3dac35df54.slice/crio-89e037bc60fd2d9782174c025dc8383b8898224735cc34e8ed5cab0cb1651de4 WatchSource:0}: Error finding container 89e037bc60fd2d9782174c025dc8383b8898224735cc34e8ed5cab0cb1651de4: Status 404 returned error can't find the container with id 89e037bc60fd2d9782174c025dc8383b8898224735cc34e8ed5cab0cb1651de4 Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.121935 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6pql6"] Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.122284 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.122334 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xlfw5"] Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.122669 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.129393 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.129668 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.129719 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.129773 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.129796 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.129895 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.129912 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.129999 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.130666 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.131005 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.143062 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.167417 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.183291 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.193124 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.203187 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc 
kubenswrapper[4688]: I1125 12:14:37.222148 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.228790 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-multus-certs\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.228841 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-cni-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.228866 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-k8s-cni-cncf-io\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.228888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c1fd8b76-41b5-4979-be54-9c7441c21aca-rootfs\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.228915 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6c3971fa-9838-436e-97b1-be050abea83a-cni-binary-copy\") 
pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.228937 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-socket-dir-parent\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229007 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-conf-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229041 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-etc-kubernetes\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229098 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1fd8b76-41b5-4979-be54-9c7441c21aca-mcd-auth-proxy-config\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229128 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-hostroot\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229152 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-os-release\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229225 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1fd8b76-41b5-4979-be54-9c7441c21aca-proxy-tls\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229271 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x428r\" (UniqueName: \"kubernetes.io/projected/c1fd8b76-41b5-4979-be54-9c7441c21aca-kube-api-access-x428r\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229310 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-netns\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229342 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-cni-multus\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229361 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6c3971fa-9838-436e-97b1-be050abea83a-multus-daemon-config\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229391 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-kubelet\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229414 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r989l\" (UniqueName: \"kubernetes.io/projected/6c3971fa-9838-436e-97b1-be050abea83a-kube-api-access-r989l\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229445 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-system-cni-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229462 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-cnibin\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.229481 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-cni-bin\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.235182 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.248984 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.262501 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.279790 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.295326 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.312815 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330756 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x428r\" (UniqueName: \"kubernetes.io/projected/c1fd8b76-41b5-4979-be54-9c7441c21aca-kube-api-access-x428r\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330796 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-os-release\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330816 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c1fd8b76-41b5-4979-be54-9c7441c21aca-proxy-tls\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330837 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-netns\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330860 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-cni-multus\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330876 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6c3971fa-9838-436e-97b1-be050abea83a-multus-daemon-config\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330898 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-cni-bin\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330919 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-kubelet\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330941 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r989l\" (UniqueName: \"kubernetes.io/projected/6c3971fa-9838-436e-97b1-be050abea83a-kube-api-access-r989l\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330959 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-system-cni-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-cnibin\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.330989 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-k8s-cni-cncf-io\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " 
pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331010 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-multus-certs\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331033 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-cni-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331050 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c1fd8b76-41b5-4979-be54-9c7441c21aca-rootfs\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331049 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-os-release\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331071 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6c3971fa-9838-436e-97b1-be050abea83a-cni-binary-copy\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331103 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-netns\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331146 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-socket-dir-parent\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331223 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-cnibin\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331199 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-conf-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331263 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-cni-bin\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331271 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-etc-kubernetes\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331299 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-multus-certs\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331338 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1fd8b76-41b5-4979-be54-9c7441c21aca-mcd-auth-proxy-config\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331339 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-run-k8s-cni-cncf-io\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331356 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-kubelet\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331361 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-hostroot\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331386 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-conf-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331382 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-hostroot\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331438 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c1fd8b76-41b5-4979-be54-9c7441c21aca-rootfs\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 
crc kubenswrapper[4688]: I1125 12:14:37.331343 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-host-var-lib-cni-multus\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331310 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-socket-dir-parent\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331487 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-system-cni-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331554 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-multus-cni-dir\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.331299 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6c3971fa-9838-436e-97b1-be050abea83a-etc-kubernetes\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.332025 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1fd8b76-41b5-4979-be54-9c7441c21aca-mcd-auth-proxy-config\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.332027 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6c3971fa-9838-436e-97b1-be050abea83a-multus-daemon-config\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.332125 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6c3971fa-9838-436e-97b1-be050abea83a-cni-binary-copy\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.333723 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.335328 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1fd8b76-41b5-4979-be54-9c7441c21aca-proxy-tls\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.345740 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.347761 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x428r\" (UniqueName: \"kubernetes.io/projected/c1fd8b76-41b5-4979-be54-9c7441c21aca-kube-api-access-x428r\") pod \"machine-config-daemon-6pql6\" (UID: \"c1fd8b76-41b5-4979-be54-9c7441c21aca\") " pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.348773 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r989l\" (UniqueName: \"kubernetes.io/projected/6c3971fa-9838-436e-97b1-be050abea83a-kube-api-access-r989l\") pod \"multus-xlfw5\" (UID: \"6c3971fa-9838-436e-97b1-be050abea83a\") " pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.382875 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.397999 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.409941 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 
25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.424002 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.454295 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.457987 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.468684 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: W1125 12:14:37.472807 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1fd8b76_41b5_4979_be54_9c7441c21aca.slice/crio-6f6b4c4270d50b5e9109390de88a7193eb90f75343a003aec71dab81931991fd WatchSource:0}: Error finding container 6f6b4c4270d50b5e9109390de88a7193eb90f75343a003aec71dab81931991fd: Status 404 returned error can't find the container with id 6f6b4c4270d50b5e9109390de88a7193eb90f75343a003aec71dab81931991fd Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.474632 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xlfw5" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.493997 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.563391 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.564048 4688 scope.go:117] "RemoveContainer" containerID="4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67" Nov 25 12:14:37 crc kubenswrapper[4688]: E1125 12:14:37.564205 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.572995 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.573271 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-csgdv"] Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.573980 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.579678 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gnhtg"] Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.580288 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.584772 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.584964 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.585807 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.585832 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.585938 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.586047 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.587401 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.590114 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.590184 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.617425 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634403 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634434 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wt7j\" (UniqueName: \"kubernetes.io/projected/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-kube-api-access-2wt7j\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634454 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-bin\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634473 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634556 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-etc-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634612 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-config\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634638 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-system-cni-dir\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634659 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-netd\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634703 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-ovn\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634721 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-kubelet\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634738 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-ovn-kubernetes\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634753 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-script-lib\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634769 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634784 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-slash\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634805 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-var-lib-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634829 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95pk\" (UniqueName: \"kubernetes.io/projected/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-kube-api-access-z95pk\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634867 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-node-log\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634882 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-env-overrides\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634908 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cnibin\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634943 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-os-release\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634977 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-systemd-units\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.634991 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-systemd\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.635009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovn-node-metrics-cert\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.635022 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.635035 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-log-socket\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.635048 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-netns\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.635072 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.637477 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.673153 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.686025 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.703054 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.715571 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.732248 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735763 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-ovn-kubernetes\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735794 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-script-lib\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc 
kubenswrapper[4688]: I1125 12:14:37.735813 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735833 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-slash\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735861 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-var-lib-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735881 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z95pk\" (UniqueName: \"kubernetes.io/projected/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-kube-api-access-z95pk\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735902 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-node-log\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735923 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-env-overrides\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735930 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-var-lib-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735944 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cnibin\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735967 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-slash\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736006 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-os-release\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735898 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-ovn-kubernetes\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.735992 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cnibin\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736090 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-systemd-units\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736121 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-systemd\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736106 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-node-log\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736151 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovn-node-metrics-cert\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736143 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-os-release\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736177 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736202 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-log-socket\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736208 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-systemd\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736227 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-netns\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736238 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-log-socket\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736260 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736281 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-netns\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736208 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-systemd-units\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736274 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736317 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736343 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wt7j\" (UniqueName: 
\"kubernetes.io/projected/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-kube-api-access-2wt7j\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736362 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-bin\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736378 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736394 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-etc-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736410 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-config\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736427 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-system-cni-dir\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736321 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736444 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-netd\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736469 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-netd\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736506 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736511 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-ovn\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736546 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-etc-openvswitch\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736572 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-bin\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736599 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-system-cni-dir\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736608 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-kubelet\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736590 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-kubelet\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736870 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-env-overrides\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736954 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-script-lib\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736970 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-ovn\") pod \"ovnkube-node-csgdv\" (UID: 
\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.736960 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.737051 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-config\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.737063 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.739904 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovn-node-metrics-cert\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.747013 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.755736 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wt7j\" (UniqueName: \"kubernetes.io/projected/f2f03eab-5a08-4ebf-8a2e-0871c9fcee61-kube-api-access-2wt7j\") pod \"multus-additional-cni-plugins-gnhtg\" (UID: \"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\") " pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.757763 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z95pk\" (UniqueName: \"kubernetes.io/projected/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-kube-api-access-z95pk\") pod \"ovnkube-node-csgdv\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.759313 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.785453 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: 
I1125 12:14:37.803189 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.816367 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.836480 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.854113 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.870169 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.885346 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.892805 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" Nov 25 12:14:37 crc kubenswrapper[4688]: W1125 12:14:37.903410 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f03eab_5a08_4ebf_8a2e_0871c9fcee61.slice/crio-636f2e5f45b3a02bbae8333b9aeb8b589cbe91d932b3de899223e8fe936954ff WatchSource:0}: Error finding container 636f2e5f45b3a02bbae8333b9aeb8b589cbe91d932b3de899223e8fe936954ff: Status 404 returned error can't find the container with id 636f2e5f45b3a02bbae8333b9aeb8b589cbe91d932b3de899223e8fe936954ff Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.921494 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c"} Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.921558 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79"} Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.921574 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"6f6b4c4270d50b5e9109390de88a7193eb90f75343a003aec71dab81931991fd"} Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.924433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cvktt" event={"ID":"dceabd45-6147-4016-b8ba-5d3dac35df54","Type":"ContainerStarted","Data":"a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3"} Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.924466 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cvktt" event={"ID":"dceabd45-6147-4016-b8ba-5d3dac35df54","Type":"ContainerStarted","Data":"89e037bc60fd2d9782174c025dc8383b8898224735cc34e8ed5cab0cb1651de4"} Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.927168 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"5ae30d5c11ebc64bae4a82a00c3572e3611fb12e00dfc381aac881784f70ec52"} Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.929720 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlfw5" event={"ID":"6c3971fa-9838-436e-97b1-be050abea83a","Type":"ContainerStarted","Data":"90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23"} Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.929755 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlfw5" event={"ID":"6c3971fa-9838-436e-97b1-be050abea83a","Type":"ContainerStarted","Data":"1179a9a38435596768141c016aef438d5dca0b4448e06ad1df265b9a00a7b5da"} Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.931370 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" event={"ID":"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61","Type":"ContainerStarted","Data":"636f2e5f45b3a02bbae8333b9aeb8b589cbe91d932b3de899223e8fe936954ff"} Nov 25 12:14:37 crc 
kubenswrapper[4688]: I1125 12:14:37.948010 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.966380 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.980265 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:37 crc kubenswrapper[4688]: I1125 12:14:37.997605 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.014579 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.030208 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.043804 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.056788 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.071697 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.082052 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.101320 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.116653 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.133177 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.145949 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.160817 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.175764 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.190316 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.204481 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.216516 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.229283 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.240119 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.258816 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.274652 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.287591 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.297517 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.320580 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.337506 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.350742 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 
25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.735003 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.738637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.738697 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.738710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.739041 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.739307 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.739308 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.739447 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.739476 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.739769 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.739929 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.747934 4688 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.748349 4688 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.749714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.749749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.749764 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.749787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.749802 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:38Z","lastTransitionTime":"2025-11-25T12:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.774276 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.779375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.779431 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.779440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.779458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.779470 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:38Z","lastTransitionTime":"2025-11-25T12:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.792310 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.795987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.796037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.796048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.796064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.796074 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:38Z","lastTransitionTime":"2025-11-25T12:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.808432 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.811950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.811995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.812011 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.812032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.812044 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:38Z","lastTransitionTime":"2025-11-25T12:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.827216 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.831984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.832038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
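The KubeletNotReady condition being recorded over and over above is mechanical: the kubelet keeps the node NotReady until a CNI network configuration appears in /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness check, assuming the same config directory as in the log; the real kubelet goes through libcni, this simply looks for the file extensions libcni accepts:

```go
// Check whether a CNI network config exists, mirroring the condition
// behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		// libcni loads configs with these extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("network config present: %s\n", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; node stays NotReady")
	}
}
```

In this boot the config is expected to be written by ovnkube-node (the multus and ovnkube-node pods are mid-restart in the surrounding records), so the condition should clear on its own once those containers come up.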
event="NodeHasNoDiskPressure" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.832049 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.832069 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.832084 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:38Z","lastTransitionTime":"2025-11-25T12:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.845736 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: E1125 12:14:38.846106 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.847936 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.847973 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.847983 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.847998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.848009 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:38Z","lastTransitionTime":"2025-11-25T12:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.936674 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2f03eab-5a08-4ebf-8a2e-0871c9fcee61" containerID="4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852" exitCode=0 Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.936766 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" event={"ID":"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61","Type":"ContainerDied","Data":"4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852"} Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.938210 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d" exitCode=0 Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.938234 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.951548 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.951590 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.951601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.951619 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.951630 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:38Z","lastTransitionTime":"2025-11-25T12:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.959458 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.974290 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Nov 25 12:14:38 crc kubenswrapper[4688]: I1125 12:14:38.989229 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.011818 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.035546 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.056813 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.058122 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.058175 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.058187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.058205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.058217 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.074358 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.088671 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.118687 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.146052 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.162577 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.162614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.162624 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.162638 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.162651 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.163995 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.191833 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.207915 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.223593 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc 
kubenswrapper[4688]: I1125 12:14:39.249874 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.264359 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.265037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.265064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.265073 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.265088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.265097 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.278726 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.292022 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.313838 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.327584 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.340373 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.354121 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.368634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.368692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.368705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.368724 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.368737 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.374228 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
5T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d
97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.389036 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.400628 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.415121 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.430988 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.447300 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc 
kubenswrapper[4688]: I1125 12:14:39.473247 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.473310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.473324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.473344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.473359 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.575470 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.575509 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.575535 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.575554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.575566 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.678484 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.678554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.678574 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.678597 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.678609 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.786158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.786242 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.786272 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.786298 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.786316 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.839439 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kxr8q"] Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.839903 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.843285 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.844501 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.845715 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.845883 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.865738 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.877388 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.889479 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.889550 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.889563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.889582 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.889594 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.894309 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.907555 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.923873 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.937820 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.944279 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.944320 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.944331 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.944342 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.944351 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.944360 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.946020 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2f03eab-5a08-4ebf-8a2e-0871c9fcee61" containerID="ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593" exitCode=0 Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.946055 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" 
event={"ID":"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61","Type":"ContainerDied","Data":"ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.957730 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4c79c4-3997-43ef-9c05-4fe44ff31141-serviceca\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.957763 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4c79c4-3997-43ef-9c05-4fe44ff31141-host\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.957781 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5lzx\" (UniqueName: \"kubernetes.io/projected/cf4c79c4-3997-43ef-9c05-4fe44ff31141-kube-api-access-s5lzx\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.962561 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z 
is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.977248 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.992039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.992117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.992130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.992147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.992196 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:39Z","lastTransitionTime":"2025-11-25T12:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:39 crc kubenswrapper[4688]: I1125 12:14:39.992500 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.010386 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.024702 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.041319 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.058433 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.058559 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4c79c4-3997-43ef-9c05-4fe44ff31141-serviceca\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.058618 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4c79c4-3997-43ef-9c05-4fe44ff31141-host\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.058648 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5lzx\" (UniqueName: \"kubernetes.io/projected/cf4c79c4-3997-43ef-9c05-4fe44ff31141-kube-api-access-s5lzx\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.058803 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4c79c4-3997-43ef-9c05-4fe44ff31141-host\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:40 
crc kubenswrapper[4688]: I1125 12:14:40.059898 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4c79c4-3997-43ef-9c05-4fe44ff31141-serviceca\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.084074 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5lzx\" (UniqueName: \"kubernetes.io/projected/cf4c79c4-3997-43ef-9c05-4fe44ff31141-kube-api-access-s5lzx\") pod \"node-ca-kxr8q\" (UID: \"cf4c79c4-3997-43ef-9c05-4fe44ff31141\") " pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.095207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.095257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.095268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.095288 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.095298 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.121157 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.135633 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.154439 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kxr8q" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.156631 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff
2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.172031 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d39
58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: W1125 12:14:40.176614 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf4c79c4_3997_43ef_9c05_4fe44ff31141.slice/crio-9c9bf2d90b9c45cc59a61acd228f7ad967996bdb45aacd19d1cd4b0519487447 WatchSource:0}: Error finding container 9c9bf2d90b9c45cc59a61acd228f7ad967996bdb45aacd19d1cd4b0519487447: Status 404 returned 
error can't find the container with id 9c9bf2d90b9c45cc59a61acd228f7ad967996bdb45aacd19d1cd4b0519487447 Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.197265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.197316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.197325 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.197348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.197362 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.201714 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.242647 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.279790 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.301650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.301729 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.301749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.301780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.301800 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.318497 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.360791 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.362821 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.362950 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.362992 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.363090 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:14:48.36306688 +0000 UTC m=+38.472695748 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.363140 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.363210 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:48.363196854 +0000 UTC m=+38.472825732 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.363321 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.363504 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:48.363469191 +0000 UTC m=+38.473098089 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.399293 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.405120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.405159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.405172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.405190 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.405204 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.442464 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.464300 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.464361 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.464476 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.464492 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.464506 4688 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.464564 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:48.464551546 +0000 UTC m=+38.574180414 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.464843 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.464855 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.464863 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.464884 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 12:14:48.464877284 +0000 UTC m=+38.574506152 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.477025 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.508492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.508569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.508583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.508604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.508622 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.518146 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.562071 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.598135 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.611097 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.611145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.611158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.611177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.611190 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.639857 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.675480 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.713973 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.714025 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.714039 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.714061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.714075 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.739763 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.739832 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.739902 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.739832 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.739977 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:40 crc kubenswrapper[4688]: E1125 12:14:40.740018 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.771901 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bb
a1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.790369 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.807595 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.816985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.817036 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.817048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.817066 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.817079 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.842381 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.883672 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.919792 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.919843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.919859 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.919882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.919899 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:40Z","lastTransitionTime":"2025-11-25T12:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.923422 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.955864 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2f03eab-5a08-4ebf-8a2e-0871c9fcee61" containerID="b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675" exitCode=0 Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.955920 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" event={"ID":"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61","Type":"ContainerDied","Data":"b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.958253 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kxr8q" event={"ID":"cf4c79c4-3997-43ef-9c05-4fe44ff31141","Type":"ContainerStarted","Data":"290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.958322 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kxr8q" event={"ID":"cf4c79c4-3997-43ef-9c05-4fe44ff31141","Type":"ContainerStarted","Data":"9c9bf2d90b9c45cc59a61acd228f7ad967996bdb45aacd19d1cd4b0519487447"} Nov 25 12:14:40 crc kubenswrapper[4688]: I1125 12:14:40.960333 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.004037 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.027615 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.027657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.027669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.027684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.027696 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.041750 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.077350 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.116730 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.131014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.131389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.131398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.131412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.131422 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.176169 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b
04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.197313 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.235130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.235216 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.235233 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.235255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.235271 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.239796 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.280343 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.330067 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.337787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.337814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.337823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.337837 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.337847 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.356859 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.396146 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.436983 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.440954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.441003 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.441014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.441030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.441042 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.479330 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.524671 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.544010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.544070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.544081 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.544101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.544113 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.557254 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.598879 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.639028 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.647098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.647200 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.647227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.647261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.647281 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.677580 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.718229 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.749762 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.749805 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.749819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.749838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.749850 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.769789 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z 
is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.797541 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.837206 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.851863 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.851900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.851914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.851934 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.851945 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.875715 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.954448 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.954500 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.954561 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.954587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.954607 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:41Z","lastTransitionTime":"2025-11-25T12:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.963303 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2f03eab-5a08-4ebf-8a2e-0871c9fcee61" containerID="7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60" exitCode=0 Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.963448 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" event={"ID":"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61","Type":"ContainerDied","Data":"7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.973173 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} Nov 25 12:14:41 crc kubenswrapper[4688]: I1125 12:14:41.988881 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.038137 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.057449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.057493 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.057501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.057516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.057546 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.058177 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.070915 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.093490 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.116284 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.156725 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.160569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.160607 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.160616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.160631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.160640 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.195766 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.252265 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.263391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.263451 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.263467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.263491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.263508 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.280843 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.318837 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.360462 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.366241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.366284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.366293 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.366307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.366318 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.396359 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.438700 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.468665 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.468907 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.468919 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.468935 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.468947 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.475055 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.570985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.571021 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.571031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.571231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.571244 4688 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.673306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.673347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.673362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.673378 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.673389 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.739813 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.739892 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:42 crc kubenswrapper[4688]: E1125 12:14:42.740004 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.739818 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:42 crc kubenswrapper[4688]: E1125 12:14:42.740223 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:42 crc kubenswrapper[4688]: E1125 12:14:42.740310 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.781335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.781403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.781425 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.781454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.781476 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.884637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.884696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.884714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.884737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.884753 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.980331 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2f03eab-5a08-4ebf-8a2e-0871c9fcee61" containerID="43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992" exitCode=0 Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.980392 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" event={"ID":"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61","Type":"ContainerDied","Data":"43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.994251 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.994337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.994352 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.994398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.994413 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:42Z","lastTransitionTime":"2025-11-25T12:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:42 crc kubenswrapper[4688]: I1125 12:14:42.998296 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:42Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.030263 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.044288 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.065117 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.082277 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.096608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.096645 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.096656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.096672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.096682 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.098017 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.111717 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.134316 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.150045 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.170756 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.186401 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.196787 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.198769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.198811 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.198821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.198835 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.198865 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.209906 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.221225 4688 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.244870 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.302482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.302547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.302562 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.302583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.302595 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.404807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.404846 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.404856 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.404875 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.404887 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.508111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.508170 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.508180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.508203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.508217 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.610920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.610966 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.610976 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.610992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.611004 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.714228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.714287 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.714303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.714324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.714340 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.817592 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.817659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.817678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.817703 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.817724 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.920962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.921016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.921033 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.921057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.921072 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:43Z","lastTransitionTime":"2025-11-25T12:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.986608 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2f03eab-5a08-4ebf-8a2e-0871c9fcee61" containerID="cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc" exitCode=0 Nov 25 12:14:43 crc kubenswrapper[4688]: I1125 12:14:43.986675 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" event={"ID":"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61","Type":"ContainerDied","Data":"cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.001288 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:43Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.017496 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.024246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.024282 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.024293 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.024311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.024324 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.039432 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494
b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.055101 4688 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.070693 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.083948 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.099004 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.112788 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.126605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.126673 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.126686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.126706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.126717 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.143991 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b
04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.163256 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.178759 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.191131 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.213454 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.229100 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.229134 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.229146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.229163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.229173 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.230483 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.244478 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:44Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.331223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.331260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.331270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.331283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.331293 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.435659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.435738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.435753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.435781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.435798 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.538831 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.538887 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.538904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.538930 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.538954 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.641517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.641605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.641622 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.641642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.641658 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.739853 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.739899 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:44 crc kubenswrapper[4688]: E1125 12:14:44.739986 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.739899 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:44 crc kubenswrapper[4688]: E1125 12:14:44.740099 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:44 crc kubenswrapper[4688]: E1125 12:14:44.740144 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.744820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.744867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.744883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.744904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.744920 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.847713 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.847746 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.847754 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.847766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.847774 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.949897 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.949955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.949972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.949993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.950007 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:44Z","lastTransitionTime":"2025-11-25T12:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.993009 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" event={"ID":"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61","Type":"ContainerStarted","Data":"4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.996884 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8"} Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.997224 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:44 crc kubenswrapper[4688]: I1125 12:14:44.997243 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.009849 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.024338 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.029200 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.032074 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.045196 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.052850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.052902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc 
kubenswrapper[4688]: I1125 12:14:45.052915 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.052940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.052954 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.057232 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 
25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.069880 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.082013 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.094433 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.107015 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.125935 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z 
is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.144416 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.155850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.155878 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.155886 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.155899 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.155970 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.158921 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.171432 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.191903 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.205576 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.218677 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 
25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.233378 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.246720 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.259412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.259449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.259458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.259474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.259488 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.261561 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.280599 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.294034 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.309222 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.323470 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.336616 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.350563 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.362459 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.362487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.362497 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.362511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.362531 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.367471 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.388960 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.402322 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.417585 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.426040 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.437989 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:45Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.465481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.465539 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.465549 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.465564 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.465573 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.567987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.568034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.568045 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.568060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.568069 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.670671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.670760 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.670780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.670797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.670808 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.773761 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.773816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.773834 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.773858 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.773892 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.876480 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.876571 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.876583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.876602 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.876615 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.978769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.978823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.978836 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.978858 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:45 crc kubenswrapper[4688]: I1125 12:14:45.978873 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:45Z","lastTransitionTime":"2025-11-25T12:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.000418 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.082323 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.082380 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.082396 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.082418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.082436 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.190605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.190653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.190663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.190678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.190687 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.293348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.293416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.293435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.293497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.293570 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.396893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.396929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.396940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.396955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.396966 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.499643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.499720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.499738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.499765 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.499794 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.602797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.602848 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.602859 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.602874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.602887 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.705838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.705882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.705893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.705914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.705927 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.739471 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.739663 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.739931 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:46 crc kubenswrapper[4688]: E1125 12:14:46.739930 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:46 crc kubenswrapper[4688]: E1125 12:14:46.740063 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:46 crc kubenswrapper[4688]: E1125 12:14:46.740423 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.809092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.809145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.809157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.809177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.809191 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.911395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.911433 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.911444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.911456 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:46 crc kubenswrapper[4688]: I1125 12:14:46.911465 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:46Z","lastTransitionTime":"2025-11-25T12:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.004115 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.013513 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.013597 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.013606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.013623 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.013635 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.116438 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.116493 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.116510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.116569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.116583 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.218560 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.218609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.218625 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.218643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.218655 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.321202 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.321238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.321246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.321269 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.321284 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.423420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.423470 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.423483 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.423500 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.423512 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.525960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.525991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.525999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.526012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.526021 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.628648 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.628684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.628695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.628709 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.628721 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.731775 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.731820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.731833 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.731849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.731861 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.834913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.834974 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.834993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.835016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.835032 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.937344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.937395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.937405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.937419 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:47 crc kubenswrapper[4688]: I1125 12:14:47.937429 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:47Z","lastTransitionTime":"2025-11-25T12:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.008809 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/0.log" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.012468 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8" exitCode=1 Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.012549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8"} Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.018476 4688 scope.go:117] "RemoveContainer" containerID="8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.034896 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.039551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.039584 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.039596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.039612 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.039623 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.050497 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.062634 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.079837 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.094118 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.115092 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.129077 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.149297 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.160178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.160217 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:48 crc 
kubenswrapper[4688]: I1125 12:14:48.160228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.160244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.160256 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.175141 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 
25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.200667 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.219168 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.239185 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.255872 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.262499 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.262537 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.262546 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.262558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.262568 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.281017 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.296932 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:48Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.364574 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.364706 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:15:04.364682545 +0000 UTC m=+54.474311413 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.364994 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.365115 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.365168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.365184 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:15:04.365170788 +0000 UTC m=+54.474799666 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.365186 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.365120 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.365213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.365289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.365302 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.365491 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.365714 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:15:04.365695603 +0000 UTC m=+54.475324471 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.465782 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.465853 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.466001 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.466020 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.466032 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.466078 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 12:15:04.466064416 +0000 UTC m=+54.575693284 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.466111 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.466180 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.466195 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.466274 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 12:15:04.466251251 +0000 UTC m=+54.575880119 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.467467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.467498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.467506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.467535 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.467545 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.570093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.570132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.570142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.570171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.570183 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.672631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.672673 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.672683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.672700 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.672711 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.739371 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.739410 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.739371 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.739516 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.739660 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:48 crc kubenswrapper[4688]: E1125 12:14:48.739733 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.774992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.775035 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.775046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.775060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.775070 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.878440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.878475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.878486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.878501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.878517 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.981609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.981666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.981683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.981704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:14:48 crc kubenswrapper[4688]: I1125 12:14:48.981717 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:48Z","lastTransitionTime":"2025-11-25T12:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.017707 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/0.log"
Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.021269 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b"}
Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.021411 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.041572 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.056666 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.071786 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.084125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.084178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.084192 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.084210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.084223 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.092912 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
5T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d
97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.107414 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.122231 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.135225 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.155790 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.175909 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23
c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.186544 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.186600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.186614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.186679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.186694 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.190776 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.205574 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.213082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.213120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.213133 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.213151 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.213165 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.224633 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: E1125 12:14:49.232111 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f
59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.235742 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.235769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.235777 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.235790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.235799 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.246968 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: E1125 12:14:49.253712 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 
2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.258467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.258544 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.258557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.258576 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.258589 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.264085 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: E1125 12:14:49.275031 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 
2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.278052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.278090 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.278101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.278119 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.278130 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.284077 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edab
f47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: E1125 12:14:49.289193 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.315788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.315855 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.315871 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.315895 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.315910 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: E1125 12:14:49.329087 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 
2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: E1125 12:14:49.329261 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.331702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.331751 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.331763 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.331782 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.331796 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.435111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.435173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.435186 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.435208 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.435227 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.538081 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.538125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.538162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.538178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.538191 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.641481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.641585 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.641608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.641686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.641700 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.744135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.744223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.744265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.744299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.744338 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.846743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.846777 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.846785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.846797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.846806 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.895579 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw"] Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.896434 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.900676 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.900911 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.915695 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.934635 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.950005 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.950081 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.950107 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.950144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.950168 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:49Z","lastTransitionTime":"2025-11-25T12:14:49Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.953603 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.973462 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.984555 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0399ec58-935d-44d2-8687-88c572bc636f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.984646 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0399ec58-935d-44d2-8687-88c572bc636f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.984758 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcznx\" (UniqueName: \"kubernetes.io/projected/0399ec58-935d-44d2-8687-88c572bc636f-kube-api-access-hcznx\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.984841 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0399ec58-935d-44d2-8687-88c572bc636f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:49 crc kubenswrapper[4688]: I1125 12:14:49.999115 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:49Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.019101 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
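The err="failed to patch status ..." records running through this section each embed a Kubernetes strategic-merge patch (note the $setElementOrder directive) as a triple-escaped JSON string, which is nearly unreadable in place. A minimal Go sketch for pretty-printing one, assuming you have copied a payload out of the journal and collapsed the journal-level escaping first; the raw fragment below is a trimmed, hypothetical stand-in for a full payload, not a verbatim copy:

    // prettypatch.go -- sketch for inspecting the escaped status-patch
    // payloads in these records. Assumes the journal escaping (\\\" -> ")
    // has already been collapsed; "raw" is a trimmed hypothetical fragment.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
    )

    func main() {
        raw := []byte(`{"metadata":{"uid":"cf4c79c4-3997-43ef-9c05-4fe44ff31141"},` +
            `"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)

        var out bytes.Buffer
        if err := json.Indent(&out, raw, "", "  "); err != nil {
            log.Fatalf("still escaped or malformed: %v", err)
        }
        fmt.Println(out.String())
    }
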
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.027260 4688 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/1.log" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.027984 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/0.log" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.031264 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b" exitCode=1 Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.031295 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.031329 4688 scope.go:117] "RemoveContainer" containerID="8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.033153 4688 scope.go:117] "RemoveContainer" containerID="f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b" Nov 25 12:14:50 crc kubenswrapper[4688]: E1125 12:14:50.034271 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.054334 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.054393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.054450 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.054474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.054490 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.055349 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.077996 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
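Every one of these status-patch failures has the same root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, months before the node's clock time. A small Go sketch, run on the node itself, to confirm the served certificate's validity window; the address comes from the log, and InsecureSkipVerify is deliberate because the goal is to read an expired certificate, not to trust it:

    // checkcert.go -- dial the webhook endpoint from the log and print the
    // validity window of the certificate it serves.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743",
            &tls.Config{InsecureSkipVerify: true}) // we want to read, not trust
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // A completed handshake always yields at least one peer certificate.
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("NotBefore: %s\nNotAfter:  %s\nexpired:   %v\n",
            cert.NotBefore, cert.NotAfter, time.Now().After(cert.NotAfter))
    }

Until that certificate is rotated, the kubelet cannot patch any pod status through the webhook, which is why the identical x509 error repeats for every pod in this section.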
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.085881 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0399ec58-935d-44d2-8687-88c572bc636f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.085939 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0399ec58-935d-44d2-8687-88c572bc636f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.086002 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcznx\" (UniqueName: \"kubernetes.io/projected/0399ec58-935d-44d2-8687-88c572bc636f-kube-api-access-hcznx\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.086067 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0399ec58-935d-44d2-8687-88c572bc636f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.086972 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0399ec58-935d-44d2-8687-88c572bc636f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 
12:14:50.087016 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0399ec58-935d-44d2-8687-88c572bc636f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.091109 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0399ec58-935d-44d2-8687-88c572bc636f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.096282 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.112087 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcznx\" (UniqueName: \"kubernetes.io/projected/0399ec58-935d-44d2-8687-88c572bc636f-kube-api-access-hcznx\") pod \"ovnkube-control-plane-749d76644c-59snw\" (UID: \"0399ec58-935d-44d2-8687-88c572bc636f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.112082 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.132703 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.145141 4688 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.157118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.157159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.157167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.157182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.157191 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.166580 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117e
e1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.190601 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.203804 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.217420 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.219945 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: W1125 12:14:50.235071 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0399ec58_935d_44d2_8687_88c572bc636f.slice/crio-68a751b3878c1e5a13beeb5170b56722aed4f44cc6413224cdf33a1d67c3afb7 WatchSource:0}: Error finding container 68a751b3878c1e5a13beeb5170b56722aed4f44cc6413224cdf33a1d67c3afb7: Status 404 returned error can't find the container with id 68a751b3878c1e5a13beeb5170b56722aed4f44cc6413224cdf33a1d67c3afb7 Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.236076 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.251188 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.258801 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.258834 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.258848 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.258864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.258874 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.269731 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.283960 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.302769 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.316539 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.331124 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b16
2f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountP
ath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aa
a48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.340287 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.350871 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.361219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.361255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.361267 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.361282 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.361295 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.365749 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.383674 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.397007 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.407873 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.440786 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edab
f47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 
9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.454248 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.464349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.464392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.464403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.464419 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.464430 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.466818 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.567339 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.567386 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.567398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.567415 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.567427 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.669181 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.669256 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.669280 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.669310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.669337 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.739421 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.739487 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.739421 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:50 crc kubenswrapper[4688]: E1125 12:14:50.739669 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:50 crc kubenswrapper[4688]: E1125 12:14:50.739750 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:50 crc kubenswrapper[4688]: E1125 12:14:50.739898 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.740587 4688 scope.go:117] "RemoveContainer" containerID="4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.757679 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.769883 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.771158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.771194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.771207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.771223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.771235 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.783228 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.793265 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.811043 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.826062 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.840034 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.850915 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.870607 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.873248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.873274 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.873283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.873300 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.873312 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.882475 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.895404 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.908633 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.923382 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.940572 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23
c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.952229 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\
"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.964053 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\
" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.975690 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.975726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.975740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.975756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:50 crc kubenswrapper[4688]: I1125 12:14:50.975770 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:50Z","lastTransitionTime":"2025-11-25T12:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.035884 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.037467 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.037809 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.039209 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/1.log" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.043809 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" event={"ID":"0399ec58-935d-44d2-8687-88c572bc636f","Type":"ContainerStarted","Data":"84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.043842 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" event={"ID":"0399ec58-935d-44d2-8687-88c572bc636f","Type":"ContainerStarted","Data":"d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.043854 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" 
event={"ID":"0399ec58-935d-44d2-8687-88c572bc636f","Type":"ContainerStarted","Data":"68a751b3878c1e5a13beeb5170b56722aed4f44cc6413224cdf33a1d67c3afb7"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.054571 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.066781 4688 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.078598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.078652 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.078670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.078692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.078707 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.086396 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.100543 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.116334 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.127789 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.139196 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.151017 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.163617 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.174689 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.181079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.181131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.181144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.181157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.181167 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.183860 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.202501 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.213914 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.226959 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.239957 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.250817 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.269861 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.283284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.283323 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.283332 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.283345 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.283356 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.288714 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.300639 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.311749 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.330107 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 
12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.344441 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.357363 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.377237 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.386269 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.386307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.386320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.386335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.386346 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.393006 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.395561 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xbqw8"] Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.396235 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:51 crc kubenswrapper[4688]: E1125 12:14:51.396326 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.428178 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.442748 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.455706 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.469129 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.483359 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.488379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.488418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.488429 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.488442 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.488455 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.497461 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.497787 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzqc\" (UniqueName: \"kubernetes.io/projected/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-kube-api-access-dlzqc\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.497832 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.510001 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.520945 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.535059 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.548594 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.558914 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.584896 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.590678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.590736 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.590753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.590777 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.590794 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.598614 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.598771 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlzqc\" (UniqueName: \"kubernetes.io/projected/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-kube-api-access-dlzqc\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:51 crc kubenswrapper[4688]: E1125 12:14:51.598935 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:51 crc kubenswrapper[4688]: E1125 12:14:51.599046 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs podName:45273ea2-4a52-4191-a40a-4b4d3b1a12dd nodeName:}" failed. No retries permitted until 2025-11-25 12:14:52.099017267 +0000 UTC m=+42.208646165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs") pod "network-metrics-daemon-xbqw8" (UID: "45273ea2-4a52-4191-a40a-4b4d3b1a12dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.600743 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.613056 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.622185 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlzqc\" (UniqueName: \"kubernetes.io/projected/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-kube-api-access-dlzqc\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.634461 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.647359 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.666724 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.676842 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.689552 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 
12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.693213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.693316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.693339 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.693361 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.693374 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.707670 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.750591 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.796469 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.796511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.796534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.796551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.796563 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.807622 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.830067 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.874793 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.899934 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.899991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.900009 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.900034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:51 crc kubenswrapper[4688]: I1125 12:14:51.900051 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:51Z","lastTransitionTime":"2025-11-25T12:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.003795 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.003850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.003863 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.003884 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.003901 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.102724 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:52 crc kubenswrapper[4688]: E1125 12:14:52.102894 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:52 crc kubenswrapper[4688]: E1125 12:14:52.102997 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs podName:45273ea2-4a52-4191-a40a-4b4d3b1a12dd nodeName:}" failed. No retries permitted until 2025-11-25 12:14:53.102972375 +0000 UTC m=+43.212601323 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs") pod "network-metrics-daemon-xbqw8" (UID: "45273ea2-4a52-4191-a40a-4b4d3b1a12dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.107389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.109329 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.109377 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.110251 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.110284 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.213063 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.213120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.213132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.213149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.213162 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.316545 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.316582 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.316589 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.316602 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.316612 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.419559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.419632 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.419655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.419679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.419699 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.522163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.522219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.522235 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.522256 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.522273 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.624589 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.624653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.624666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.624684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.624699 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.728052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.728125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.728143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.728168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.728186 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.739371 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.739492 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.739647 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:52 crc kubenswrapper[4688]: E1125 12:14:52.739504 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:52 crc kubenswrapper[4688]: E1125 12:14:52.739727 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:52 crc kubenswrapper[4688]: E1125 12:14:52.739802 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.831780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.831823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.831835 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.831852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.831863 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.935290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.935353 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.935371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.935393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:52 crc kubenswrapper[4688]: I1125 12:14:52.935409 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:52Z","lastTransitionTime":"2025-11-25T12:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.038267 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.038336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.038355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.038385 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.038430 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.114061 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:53 crc kubenswrapper[4688]: E1125 12:14:53.114206 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:53 crc kubenswrapper[4688]: E1125 12:14:53.114275 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs podName:45273ea2-4a52-4191-a40a-4b4d3b1a12dd nodeName:}" failed. No retries permitted until 2025-11-25 12:14:55.114257591 +0000 UTC m=+45.223886459 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs") pod "network-metrics-daemon-xbqw8" (UID: "45273ea2-4a52-4191-a40a-4b4d3b1a12dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.141314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.141372 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.141383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.141400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.141413 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.244907 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.244995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.245031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.245063 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.245087 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.348199 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.348707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.348718 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.348737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.348749 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.451644 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.452084 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.452314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.452570 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.452767 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.556327 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.556380 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.556395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.556412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.556425 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.658910 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.658974 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.658992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.659023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.659042 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.739183 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:53 crc kubenswrapper[4688]: E1125 12:14:53.739366 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.760900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.760948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.760959 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.760976 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.760988 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.864255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.864388 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.864406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.864434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.864452 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.967922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.967974 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.967987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.968007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:53 crc kubenswrapper[4688]: I1125 12:14:53.968020 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:53Z","lastTransitionTime":"2025-11-25T12:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.070434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.070506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.070535 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.070558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.070583 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.174269 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.174399 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.174418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.174446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.174470 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.276491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.276546 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.276554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.276566 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.276575 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.379615 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.379676 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.379690 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.379710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.379725 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.482171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.482219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.482230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.482244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.482255 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.585429 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.585491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.585504 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.585554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.585571 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.688705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.688747 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.688759 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.688774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.688782 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.739833 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:54 crc kubenswrapper[4688]: E1125 12:14:54.740018 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.740111 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.740116 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:54 crc kubenswrapper[4688]: E1125 12:14:54.740241 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:54 crc kubenswrapper[4688]: E1125 12:14:54.740340 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.791693 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.791749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.791768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.791791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.791809 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.895781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.895876 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.895895 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.895949 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:54 crc kubenswrapper[4688]: I1125 12:14:54.895968 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:54Z","lastTransitionTime":"2025-11-25T12:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.000073 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.000152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.000174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.000202 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.000224 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.102940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.102979 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.102990 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.103004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.103016 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.160330 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:55 crc kubenswrapper[4688]: E1125 12:14:55.160500 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:55 crc kubenswrapper[4688]: E1125 12:14:55.160594 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs podName:45273ea2-4a52-4191-a40a-4b4d3b1a12dd nodeName:}" failed. No retries permitted until 2025-11-25 12:14:59.160577305 +0000 UTC m=+49.270206173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs") pod "network-metrics-daemon-xbqw8" (UID: "45273ea2-4a52-4191-a40a-4b4d3b1a12dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.206151 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.206298 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.206329 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.206362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.206385 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.310185 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.310251 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.310270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.310295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.310314 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.414214 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.414281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.414299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.414322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.414340 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.517244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.517346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.517366 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.517395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.517414 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.621091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.621160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.621180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.621219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.621238 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.725330 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.725377 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.725390 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.725405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.725418 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.738832 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:55 crc kubenswrapper[4688]: E1125 12:14:55.739000 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.829657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.829729 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.829752 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.829785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.829810 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.933250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.933319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.933340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.933365 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:55 crc kubenswrapper[4688]: I1125 12:14:55.933411 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:55Z","lastTransitionTime":"2025-11-25T12:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.036950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.037016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.037038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.037064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.037092 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.139488 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.139569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.139588 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.139612 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.139630 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.243146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.243195 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.243212 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.243237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.243254 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.346024 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.346077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.346091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.346109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.346124 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.448558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.448608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.448620 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.448636 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.448649 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.551319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.551376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.551387 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.551404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.551416 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.653404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.653474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.653493 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.653517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.653571 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.738951 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.739019 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:56 crc kubenswrapper[4688]: E1125 12:14:56.739131 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.739157 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:56 crc kubenswrapper[4688]: E1125 12:14:56.739304 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:56 crc kubenswrapper[4688]: E1125 12:14:56.739405 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.757001 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.757054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.757091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.757120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.757143 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.859703 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.859768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.859781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.859799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.859812 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.961849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.961937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.961953 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.962014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:56 crc kubenswrapper[4688]: I1125 12:14:56.962039 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:56Z","lastTransitionTime":"2025-11-25T12:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.065556 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.065630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.065670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.065689 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.065702 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.170180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.170242 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.170256 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.170273 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.170285 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.273796 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.273883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.273905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.273940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.274004 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.376468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.376517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.376550 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.376569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.376585 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.492657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.492706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.492714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.492728 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.492740 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.596780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.596836 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.596846 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.596867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.596881 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.700121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.700164 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.700173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.700185 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.700196 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.739698 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:57 crc kubenswrapper[4688]: E1125 12:14:57.739856 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.803031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.803058 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.803066 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.803079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.803088 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.906432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.906496 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.906506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.906547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:57 crc kubenswrapper[4688]: I1125 12:14:57.906558 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:57Z","lastTransitionTime":"2025-11-25T12:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.010264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.010324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.010340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.010358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.010374 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.113980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.114020 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.114050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.114066 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.114075 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.217123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.217165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.217176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.217191 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.217200 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.320465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.320547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.320566 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.320586 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.320602 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.424210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.424300 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.424314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.424331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.424344 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.526806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.526849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.526861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.526885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.526896 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.629493 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.629551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.629561 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.629575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.629590 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.732235 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.732294 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.732307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.732326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.732341 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.739635 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.739646 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.739717 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:14:58 crc kubenswrapper[4688]: E1125 12:14:58.739862 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:14:58 crc kubenswrapper[4688]: E1125 12:14:58.739940 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:14:58 crc kubenswrapper[4688]: E1125 12:14:58.740020 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.834916 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.834967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.834979 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.834997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.835009 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.937951 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.938015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.938042 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.938067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:58 crc kubenswrapper[4688]: I1125 12:14:58.938086 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:58Z","lastTransitionTime":"2025-11-25T12:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.040922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.040994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.041012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.041038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.041055 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.144031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.144090 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.144106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.144131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.144149 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.201897 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.202190 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.202296 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs podName:45273ea2-4a52-4191-a40a-4b4d3b1a12dd nodeName:}" failed. No retries permitted until 2025-11-25 12:15:07.202265841 +0000 UTC m=+57.311894749 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs") pod "network-metrics-daemon-xbqw8" (UID: "45273ea2-4a52-4191-a40a-4b4d3b1a12dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.247778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.247878 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.247904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.247935 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.247959 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.351905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.351944 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.351952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.351965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.351974 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.446982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.447038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.447049 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.447074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.447088 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.466584 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:59Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.471305 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.471336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.471344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.471356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.471365 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.486851 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:59Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.492073 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.492136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
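Annotation: the status-patch retries in this log all fail for the same reason given at the end of the payload above: the serving certificate of the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, long before the node's current clock time of 2025-11-25T12:14:59Z. A minimal sketch for confirming the certificate dates from a shell on the node, assuming openssl is installed and the webhook listener is still accepting connections:

  # Pull the webhook's serving certificate and print its validity window
  openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -dates
  # notAfter is expected to match the 2025-08-24T17:21:41Z expiry quoted in the error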
event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.492155 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.492179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.492195 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.511438 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:59Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.515763 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.515812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.515823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.515840 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.515853 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.534556 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:59Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.539596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.539634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.539644 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.539660 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.539672 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.554472 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:14:59Z is after 2025-08-24T17:21:41Z" Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.554660 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.556296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.556375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.556390 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.556414 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.556427 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.659940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.660013 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.660036 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.660064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.660088 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.739577 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:14:59 crc kubenswrapper[4688]: E1125 12:14:59.739790 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.763320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.763363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.763375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.763394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.763407 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.866614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.866659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.866670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.866686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.866698 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.969706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.969778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.969796 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.969823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:14:59 crc kubenswrapper[4688]: I1125 12:14:59.969843 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:14:59Z","lastTransitionTime":"2025-11-25T12:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.072613 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.072649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.072656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.072670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.072680 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.174600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.174646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.174655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.174669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.174678 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.277788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.277864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.277883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.277908 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.277928 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.381444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.381487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.381497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.381512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.381536 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.484301 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.484385 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.484399 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.484420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.484433 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.586503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.586569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.586581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.586597 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.586607 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.608020 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.616745 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.624419 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.646176 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.657700 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.670244 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.685753 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.689581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.689649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.689667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.689690 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.689706 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.700191 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.717737 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.732593 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.739541 4688 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.739727 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 12:15:00 crc kubenswrapper[4688]: E1125 12:15:00.739899 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.739927 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:15:00 crc kubenswrapper[4688]: E1125 12:15:00.740033 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 12:15:00 crc kubenswrapper[4688]: E1125 12:15:00.740095 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.756382 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.770789 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.787757 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.793195 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.793237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.793247 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.793261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.793273 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.804801 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.818040 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.831216 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.846204 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.860249 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.890002 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.896464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.896516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.896552 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.896572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.896583 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.903213 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.921070 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.933978 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.949295 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.961946 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.984928 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.999901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.999966 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:00 crc kubenswrapper[4688]: I1125 12:15:00.999984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.000010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.000033 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:00Z","lastTransitionTime":"2025-11-25T12:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.001793 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:00Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.020553 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.034431 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.048549 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.063771 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23
c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.076805 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\
"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.089126 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.102859 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.102913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.102927 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.102948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.102962 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.104356 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.121902 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.138935 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.151741 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.176291 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edab
f47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f6901e9013404d7302bfa0bd40ffcbdf329b496437090af8cc8fc0fe8378dc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:47Z\\\",\\\"message\\\":\\\"bernetes for network=default\\\\nI1125 12:14:47.403575 5977 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:47.403583 5977 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 
9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:01Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.206798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.206861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.206873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.206895 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.206909 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.310140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.310203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.310216 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.310238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.310250 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.413177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.413253 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.413269 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.413290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.413307 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.515954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.516008 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.516021 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.516037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.516050 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.619798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.619868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.619882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.619906 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.619922 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.723197 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.723249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.723262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.723277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.723288 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.739127 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:01 crc kubenswrapper[4688]: E1125 12:15:01.739359 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.826713 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.826804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.826832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.826868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.826899 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.930766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.930839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.930856 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.930885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:01 crc kubenswrapper[4688]: I1125 12:15:01.930906 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:01Z","lastTransitionTime":"2025-11-25T12:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.034490 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.034606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.034626 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.034656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.034720 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.138032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.138099 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.138115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.138139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.138156 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.240720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.240763 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.240773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.240788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.240799 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.343968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.344030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.344039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.344061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.344074 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.447322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.447382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.447397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.447418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.447434 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.550393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.550460 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.550472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.550490 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.550502 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.652751 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.652798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.652807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.652819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.652826 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.739079 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.739155 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:02 crc kubenswrapper[4688]: E1125 12:15:02.739208 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:02 crc kubenswrapper[4688]: E1125 12:15:02.739378 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.739404 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:02 crc kubenswrapper[4688]: E1125 12:15:02.739687 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.754996 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.755047 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.755076 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.755097 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.755109 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.858472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.858546 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.858557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.858574 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.858595 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.962582 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.962627 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.962640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.962845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:02 crc kubenswrapper[4688]: I1125 12:15:02.962857 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:02Z","lastTransitionTime":"2025-11-25T12:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.065342 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.065400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.065417 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.065441 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.065458 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.168806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.168871 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.168887 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.168914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.168933 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.271819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.271908 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.271922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.271937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.271949 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.374778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.374824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.374838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.374857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.374870 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.477573 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.477622 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.477645 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.477707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.477723 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.546496 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.547876 4688 scope.go:117] "RemoveContainer" containerID="f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.566396 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\"
:\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.581159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.581226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.581241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.581267 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.581286 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.586260 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.600429 4688 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.617326 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.628977 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.640882 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.660194 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.674508 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.683162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.683202 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.683215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.683232 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.683244 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.702311 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.716471 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.730756 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.739508 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:03 crc kubenswrapper[4688]: E1125 12:15:03.739645 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.743138 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.757701 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.768013 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.779766 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.785705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.785750 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.785766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.785786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.785803 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.792343 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.803345 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.824936 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:03Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.888967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.889026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.889039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.889057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.889068 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.991596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.991640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.991653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.991672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:03 crc kubenswrapper[4688]: I1125 12:15:03.991684 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:03Z","lastTransitionTime":"2025-11-25T12:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.087357 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/1.log" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.090763 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.091438 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.093618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.093666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.093678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.093695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.093707 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.116948 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.133470 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.149015 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.163091 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.177054 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.188130 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.196023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.196063 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.196075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.196094 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.196107 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.202815 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.218113 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.229538 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.244755 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.280778 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025
-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.299002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.299044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.299056 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.299074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.299086 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.307545 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.327480 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.342952 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.356260 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.371122 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.394385 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.401925 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.401987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.401997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.402016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.402027 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.411492 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.459394 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.459545 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.459625 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:15:36.459595971 +0000 UTC m=+86.569224839 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.459653 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.459694 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.459709 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:15:36.459693533 +0000 UTC m=+86.569322401 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.459765 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.459804 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:15:36.459795756 +0000 UTC m=+86.569424684 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.503915 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.503953 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.503962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.503974 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.503985 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.509858 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.524813 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.539267 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.555403 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.560317 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.560420 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.560563 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.560601 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.560604 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.560615 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.560627 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.560642 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.560706 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 12:15:36.560666124 +0000 UTC m=+86.670295052 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.560727 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 12:15:36.560719445 +0000 UTC m=+86.670348313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.572509 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.594601 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.605893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.605938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.605950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.605966 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.605977 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.606383 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.620860 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.635305 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.649908 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.664712 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.685487 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.699642 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.711824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.712174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.712245 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.712311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.712372 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.717143 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.732120 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.739829 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.739825 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.739978 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.740112 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.739824 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:04 crc kubenswrapper[4688]: E1125 12:15:04.740230 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.747545 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.760845 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cn
i-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e
7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" 
for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.771337 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.783833 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:04Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.814797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.814839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.814850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.814864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.814873 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.918707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.918772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.918787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.918814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:04 crc kubenswrapper[4688]: I1125 12:15:04.918831 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:04Z","lastTransitionTime":"2025-11-25T12:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.021978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.022015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.022027 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.022042 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.022052 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.098385 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/2.log" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.099142 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/1.log" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.104048 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2" exitCode=1 Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.104112 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.104162 4688 scope.go:117] "RemoveContainer" containerID="f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.105044 4688 scope.go:117] "RemoveContainer" containerID="3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2" Nov 25 12:15:05 crc kubenswrapper[4688]: E1125 12:15:05.105285 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.124511 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.126186 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.126219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.126230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.126249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.126264 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.144258 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.160050 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.180793 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.198338 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.223344 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.228223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.228261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.228276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.228295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.228310 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.241653 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.257783 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.274553 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.289756 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.307205 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23
c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.319749 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\
"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.331408 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.331433 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.331441 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.331456 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.331468 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.334231 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.348792 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.362992 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.376213 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.389389 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.409685 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeac
e3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6a20504b9de7ca08a0159ab66f424f9d1f3edabf47969322db88e2d1fb71e1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:14:48.942599 6122 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1125 12:14:48.942629 6122 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1125 12:14:48.942648 6122 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1125 12:14:48.942710 6122 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 12:14:48.942747 6122 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 12:14:48.943216 6122 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 12:14:48.943306 6122 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 12:14:48.943344 6122 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:14:48.943386 6122 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:14:48.943464 6122 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 
12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:05Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.435301 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.435355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.435373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.435394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.435406 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.538647 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.538723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.538737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.538763 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.538778 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.642715 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.642779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.642795 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.642820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.642835 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.739901 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:05 crc kubenswrapper[4688]: E1125 12:15:05.740078 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.745940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.745984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.745996 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.746014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.746025 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.849352 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.849409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.849433 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.849458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.849474 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.956136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.956608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.956797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.956974 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:05 crc kubenswrapper[4688]: I1125 12:15:05.957217 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:05Z","lastTransitionTime":"2025-11-25T12:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.060328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.060392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.060407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.060432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.060450 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.110713 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/2.log" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.115392 4688 scope.go:117] "RemoveContainer" containerID="3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2" Nov 25 12:15:06 crc kubenswrapper[4688]: E1125 12:15:06.115652 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.150224 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.164118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.164160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.164171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.164202 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.164215 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.167750 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.189139 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.202434 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.219735 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.235240 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.247672 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.262878 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.266772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.266828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.266839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.266851 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.266860 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.278160 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.289922 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.300376 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.320700 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.340031 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeac
e3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.357334 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.369816 4688 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.369861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.369875 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.369891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.369902 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.378815 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T1
2:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.394365 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.411473 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.425191 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:06Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.472977 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.473019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.473029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.473046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.473058 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.576392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.576463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.576486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.576559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.576585 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.679127 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.679192 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.679212 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.679235 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.679254 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.739378 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.739435 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:06 crc kubenswrapper[4688]: E1125 12:15:06.739674 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.739742 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:06 crc kubenswrapper[4688]: E1125 12:15:06.739906 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:06 crc kubenswrapper[4688]: E1125 12:15:06.740034 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.782382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.782797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.782963 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.783103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.783248 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.885903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.885951 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.885962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.885980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.885994 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.989455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.989508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.989557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.989585 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:06 crc kubenswrapper[4688]: I1125 12:15:06.989602 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:06Z","lastTransitionTime":"2025-11-25T12:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.092011 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.092056 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.092067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.092082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.092094 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.195285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.195339 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.195364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.195386 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.195400 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.292635 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:07 crc kubenswrapper[4688]: E1125 12:15:07.292983 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:15:07 crc kubenswrapper[4688]: E1125 12:15:07.293142 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs podName:45273ea2-4a52-4191-a40a-4b4d3b1a12dd nodeName:}" failed. No retries permitted until 2025-11-25 12:15:23.293102725 +0000 UTC m=+73.402731663 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs") pod "network-metrics-daemon-xbqw8" (UID: "45273ea2-4a52-4191-a40a-4b4d3b1a12dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.298806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.298850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.298868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.298917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.298968 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.402235 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.402292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.402309 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.402333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.402351 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.549710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.549824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.549851 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.549882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.549903 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.653106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.653169 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.653187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.653211 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.653230 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.739595 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:07 crc kubenswrapper[4688]: E1125 12:15:07.739803 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.757015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.757070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.757089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.757113 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.757132 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.860515 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.860621 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.860639 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.860664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.860681 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.963595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.963640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.963650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.963666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:07 crc kubenswrapper[4688]: I1125 12:15:07.963679 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:07Z","lastTransitionTime":"2025-11-25T12:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.066807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.066867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.066885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.066912 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.066928 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.170890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.170956 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.170976 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.171005 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.171024 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.274053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.274103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.274120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.274145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.274159 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.377419 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.377485 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.377504 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.377557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.377578 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.481060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.481141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.481181 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.481220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.481249 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.585160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.585231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.585251 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.585276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.585290 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.689116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.689179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.689205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.689228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.689247 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.739877 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.740002 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.740043 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:08 crc kubenswrapper[4688]: E1125 12:15:08.740166 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:08 crc kubenswrapper[4688]: E1125 12:15:08.740272 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:08 crc kubenswrapper[4688]: E1125 12:15:08.740324 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.792209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.792292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.792310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.792713 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.792737 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.895634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.895705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.895722 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.895742 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.895758 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.998434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.998596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.998620 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.998652 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:08 crc kubenswrapper[4688]: I1125 12:15:08.998679 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:08Z","lastTransitionTime":"2025-11-25T12:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.102407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.102471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.102486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.102507 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.102574 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.205328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.205373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.205385 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.205402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.205414 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.308245 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.308335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.308431 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.308469 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.308494 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.411204 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.411266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.411283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.411306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.411323 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.515843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.515978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.516023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.516063 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.516110 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
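[Editor's note: by this point the same NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" cycle has repeated nine times in under two seconds. When scanning a capture like this one, tallying messages makes the repetition visible without paging through it. A minimal Python sketch, assuming the journal has been saved to a local file; the filename is hypothetical:]

    import re
    from collections import Counter

    counts = Counter()
    with open("kubelet-journal.log") as fh:  # hypothetical saved copy of this journal
        for entry in fh:
            # klog format: I1125 12:15:09.635148 4688 setters.go:603] "message" ...
            m = re.search(r'\d+ [a-z_]+\.go:\d+\] "([^"]+)"', entry)
            if m:
                counts[m.group(1)] += 1

    for msg, n in counts.most_common(10):
        print(f"{n:5d}  {msg}")
    # Expected top hits for this excerpt: "Recording event message for node",
    # "Node became not ready", "Error syncing pod, skipping".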
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.607291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.607373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.607390 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.607417 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.607438 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:09 crc kubenswrapper[4688]: E1125 12:15:09.629562 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:09Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.634985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.635061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
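[Editor's note: the entry at 12:15:09.629562 shows why the node stays NotReady even while the kubelet keeps reporting healthy memory, disk, and PID conditions: the status patch itself is rejected, because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, three months before this boot. Both reported failures — the empty CNI configuration directory and the expired certificate — can be confirmed from the node. A minimal Python sketch, assuming it runs on the node itself and that the third-party cryptography package is available:]

    import os
    import ssl
    from datetime import datetime

    from cryptography import x509  # third-party: pip install cryptography

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet error
    WEBHOOK = ("127.0.0.1", 9743)          # node.network-node-identity webhook address

    # "no CNI configuration file in /etc/kubernetes/cni/net.d/": the kubelet
    # reports NetworkReady=false until the network operator writes a config here.
    confs = os.listdir(CNI_DIR) if os.path.isdir(CNI_DIR) else []
    print(f"CNI configs in {CNI_DIR}: {confs or 'none'}")

    # "x509: certificate has expired": fetch the webhook's certificate without
    # verifying it, then compare its notAfter to the current time.
    pem = ssl.get_server_certificate(WEBHOOK)
    cert = x509.load_pem_x509_certificate(pem.encode())
    print(f"webhook cert notAfter={cert.not_valid_after} "
          f"expired={datetime.utcnow() > cert.not_valid_after}")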
event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.635084 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.635117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.635148 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:09 crc kubenswrapper[4688]: E1125 12:15:09.658982 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:09Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.665159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.665244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.665267 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.665293 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.665314 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:09 crc kubenswrapper[4688]: E1125 12:15:09.687030 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:09Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.693865 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.693993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.694082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.694179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.694272 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:09 crc kubenswrapper[4688]: E1125 12:15:09.717874 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:09Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.723807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.723875 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.723888 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.723906 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.723919 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:09 crc kubenswrapper[4688]: E1125 12:15:09.738368 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:09Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:09 crc kubenswrapper[4688]: E1125 12:15:09.738565 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.738889 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:09 crc kubenswrapper[4688]: E1125 12:15:09.739145 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.741093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.741128 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.741141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.741160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.741173 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.843406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.843463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.843479 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.843502 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.843518 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.946370 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.946444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.946462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.946487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:09 crc kubenswrapper[4688]: I1125 12:15:09.946503 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:09Z","lastTransitionTime":"2025-11-25T12:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.049381 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.049455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.049477 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.049510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.049573 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.151738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.151809 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.151831 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.151861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.151883 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.255351 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.255436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.255461 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.255492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.255516 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.358737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.358849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.358862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.358879 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.358891 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.461625 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.461733 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.461750 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.461772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.461789 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.564878 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.564930 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.564941 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.564957 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.564969 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.668227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.668599 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.668682 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.668777 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.668853 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.739141 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.739306 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:10 crc kubenswrapper[4688]: E1125 12:15:10.739401 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:10 crc kubenswrapper[4688]: E1125 12:15:10.739575 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.739748 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:10 crc kubenswrapper[4688]: E1125 12:15:10.739974 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.761491 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name
\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1
a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.772354 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.772410 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.772428 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.772459 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.772479 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.775810 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.790322 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.807253 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.820836 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.835924 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.851509 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.873615 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.874631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.874679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.874715 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.874733 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.874747 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.890241 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.904108 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.915164 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.925922 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.936507 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.948275 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.965205 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.977789 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.977870 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.977894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.977922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.977941 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:10Z","lastTransitionTime":"2025-11-25T12:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:10 crc kubenswrapper[4688]: I1125 12:15:10.983802 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:10Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.013769 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:11Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.039915 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:11Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.080491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.080559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.080572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.080590 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.080603 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:11Z","lastTransitionTime":"2025-11-25T12:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.183733 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.183800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.183814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.183838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.183853 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:11Z","lastTransitionTime":"2025-11-25T12:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry block (four "Recording event message" entries plus "Node became not ready") repeated at 12:15:11.286, 12:15:11.389, 12:15:11.493, 12:15:11.596 and 12:15:11.699 ...]
Nov 25 12:15:11 crc kubenswrapper[4688]: I1125 12:15:11.739892 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8"
Nov 25 12:15:11 crc kubenswrapper[4688]: E1125 12:15:11.740066 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd"
[... five-entry node-status block repeated at 12:15:11.802 and 12:15:11.905 ...]
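The two status_manager failures above record their own root cause inline: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate whose notAfter (2025-08-24T17:21:41Z) is three months behind the node clock (2025-11-25), so every kubelet status patch is rejected. A minimal out-of-band probe to confirm what the endpoint actually serves might look like this (host and port are taken from the log above; the third-party cryptography package is an assumed dependency, not something the cluster provides):

    import socket
    import ssl

    from cryptography import x509  # third-party; assumed available

    # Inspection only: disable verification so the expired cert can be fetched.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    # Host and port come from the webhook error in the log above.
    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # the log says 2025-08-24T17:21:41Z

If the printed notAfter matches the log, restarting services will not help; the certificate itself has to rotate, which on a CRC guest resumed after a long suspend typically happens on its own once the cluster has run for a while with a correct clock.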
[... five-entry node-status block repeated at 12:15:12.008, 12:15:12.112, 12:15:12.215, 12:15:12.318, 12:15:12.422, 12:15:12.524, 12:15:12.628 and 12:15:12.731 ...]
Nov 25 12:15:12 crc kubenswrapper[4688]: I1125 12:15:12.739291 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:15:12 crc kubenswrapper[4688]: I1125 12:15:12.739404 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:15:12 crc kubenswrapper[4688]: E1125 12:15:12.739487 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 12:15:12 crc kubenswrapper[4688]: I1125 12:15:12.739500 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
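Every NetworkReady=false entry repeats the same check result: the runtime found no CNI configuration file under /etc/kubernetes/cni/net.d/. Reproducing that directory scan is a quick way to watch for the network operator finally writing its config; a sketch, assuming the conventional .conf/.conflist/.json suffixes used by CNI's libcni:

    import json
    from pathlib import Path

    # Directory taken from the log; the accepted suffixes follow the CNI libcni
    # convention (.conf, .conflist, .json) and are an assumption here.
    NET_D = Path("/etc/kubernetes/cni/net.d")

    candidates = sorted(NET_D.iterdir()) if NET_D.is_dir() else []
    configs = [p for p in candidates if p.suffix in {".conf", ".conflist", ".json"}]

    if not configs:
        print(f"no CNI configuration file in {NET_D}/ -- network plugin not ready")

    for p in configs:
        doc = json.loads(p.read_text())
        # A .conflist carries a plugin chain under "plugins"; a plain .conf
        # describes a single plugin, so fall back to the document itself.
        plugins = [pl.get("type") for pl in doc.get("plugins", [doc])]
        print(f"{p.name}: network {doc.get('name')!r}, plugins {plugins}")

Run in a loop, this should flip from the not-ready message to a plugin listing at roughly the moment the journal's NetworkReady errors stop.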
Nov 25 12:15:12 crc kubenswrapper[4688]: E1125 12:15:12.739626 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 12:15:12 crc kubenswrapper[4688]: E1125 12:15:12.739703 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... five-entry node-status block repeated at 12:15:12.835, 12:15:12.939, 12:15:13.043, 12:15:13.147, 12:15:13.251, 12:15:13.355, 12:15:13.460 and 12:15:13.564 ...]
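Each setters.go:603 entry embeds the exact Ready condition the kubelet publishes, as ordinary JSON after condition=. With the blocks repeating roughly ten times a second, pulling the condition out programmatically beats eyeballing the journal; a sketch, with the condition text copied (message abbreviated) from the entries above and parse_condition as an illustrative helper, not kubelet code:

    import json

    # Condition text copied from a setters.go:603 entry above (message abbreviated).
    LINE = (
        'I1125 12:15:13.564121 4688 setters.go:603] "Node became not ready" '
        'node="crc" condition={"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-11-25T12:15:13Z",'
        '"lastTransitionTime":"2025-11-25T12:15:13Z","reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false"}'
    )

    def parse_condition(line: str) -> dict:
        # Everything after "condition=" is a JSON object.
        _, _, payload = line.partition("condition=")
        return json.loads(payload)

    cond = parse_condition(LINE)
    print(f'{cond["type"]}={cond["status"]} reason={cond["reason"]}')
    print("message:", cond["message"])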
[... five-entry node-status block repeated at 12:15:13.668 ...]
Nov 25 12:15:13 crc kubenswrapper[4688]: I1125 12:15:13.738823 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8"
Nov 25 12:15:13 crc kubenswrapper[4688]: E1125 12:15:13.739021 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd"
[... five-entry node-status block repeated at 12:15:13.771, 12:15:13.874, 12:15:13.977, 12:15:14.080, 12:15:14.184, 12:15:14.287, 12:15:14.391, 12:15:14.496, 12:15:14.599 and 12:15:14.702 ...]
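The status_manager.go:875 failures at the top of this section carry the entire pod-status patch as a doubly escaped JSON string (the \\\" runs): one quoting layer from the patch being quoted into the error text, and one from the err="..." formatting of the log line itself. A sketch of unwinding both layers, using a truncated fragment of the network-node-identity-vrzqb patch above:

    import json

    # Truncated fragment of the network-node-identity-vrzqb patch above; in the
    # journal it carries two quoting layers (\" around the patch string, \\\"
    # for the JSON quotes inside it).
    raw = r'\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"}}\"'

    # First pass undoes the err="..." layer, second pass the inner patch layer.
    step1 = raw.encode("latin-1").decode("unicode_escape")
    patch = step1.strip('"').encode("latin-1").decode("unicode_escape")
    print(json.dumps(json.loads(patch), indent=2))

The same two-pass unescape applies to the full patches above; only the fragment length differs.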
Nov 25 12:15:14 crc kubenswrapper[4688]: I1125 12:15:14.739302 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:15:14 crc kubenswrapper[4688]: I1125 12:15:14.739418 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:15:14 crc kubenswrapper[4688]: I1125 12:15:14.739441 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 12:15:14 crc kubenswrapper[4688]: E1125 12:15:14.739611 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 12:15:14 crc kubenswrapper[4688]: E1125 12:15:14.739734 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 12:15:14 crc kubenswrapper[4688]: E1125 12:15:14.739871 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... five-entry node-status block repeated at 12:15:14.805, 12:15:14.908, 12:15:15.011, 12:15:15.114, 12:15:15.216, 12:15:15.319, 12:15:15.422, 12:15:15.524 and 12:15:15.628 ...]
Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.730967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.731023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.731032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.731050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.731059 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:15Z","lastTransitionTime":"2025-11-25T12:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.739377 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:15 crc kubenswrapper[4688]: E1125 12:15:15.739569 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.832851 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.832895 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.832906 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.832924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.832934 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:15Z","lastTransitionTime":"2025-11-25T12:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.935098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.935157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.935166 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.935183 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:15 crc kubenswrapper[4688]: I1125 12:15:15.935193 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:15Z","lastTransitionTime":"2025-11-25T12:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.041710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.041761 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.041772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.041788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.041799 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.144089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.144126 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.144136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.144152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.144164 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.246802 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.246838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.246847 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.246861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.246870 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.349270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.349315 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.349324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.349337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.349348 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.452058 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.452091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.452102 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.452117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.452128 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.555047 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.555077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.555086 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.555116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.555125 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.659292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.659328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.659339 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.659356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.659366 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.739806 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.739810 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.739969 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:16 crc kubenswrapper[4688]: E1125 12:15:16.740097 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:16 crc kubenswrapper[4688]: E1125 12:15:16.740260 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:16 crc kubenswrapper[4688]: E1125 12:15:16.740475 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.762736 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.762820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.762839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.763271 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.763327 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.866437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.866498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.866512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.866555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.866572 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.969548 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.969618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.969629 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.969650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:16 crc kubenswrapper[4688]: I1125 12:15:16.969663 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:16Z","lastTransitionTime":"2025-11-25T12:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.071598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.071647 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.071659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.071675 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.071686 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.173503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.173583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.173594 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.173612 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.173624 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.276449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.276507 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.276555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.276579 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.276610 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.378986 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.379067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.379092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.379108 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.379118 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.482172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.482251 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.482274 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.482304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.482330 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.586753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.586795 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.586804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.586819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.586828 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.690491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.690618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.690662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.690703 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.690816 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.739091 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:17 crc kubenswrapper[4688]: E1125 12:15:17.739293 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.794541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.794589 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.794601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.794617 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.794631 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.897719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.898084 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.898228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.898257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:17 crc kubenswrapper[4688]: I1125 12:15:17.898277 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:17Z","lastTransitionTime":"2025-11-25T12:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.001739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.001811 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.001826 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.001843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.001879 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.105938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.105986 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.105996 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.106016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.106027 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.209209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.209258 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.209270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.209285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.209296 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.314975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.315123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.315147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.315174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.315226 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.418645 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.418704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.418718 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.418735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.418748 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.522203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.522595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.522615 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.522638 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.522654 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.625330 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.625365 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.625375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.625388 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.625397 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.728927 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.728986 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.729004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.729027 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.729045 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.739343 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.739415 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:18 crc kubenswrapper[4688]: E1125 12:15:18.739465 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.739649 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:18 crc kubenswrapper[4688]: E1125 12:15:18.739658 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:18 crc kubenswrapper[4688]: E1125 12:15:18.739723 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.831902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.831945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.831956 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.831972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.831980 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.935662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.936041 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.936187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.936551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:18 crc kubenswrapper[4688]: I1125 12:15:18.936689 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:18Z","lastTransitionTime":"2025-11-25T12:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.040705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.040756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.040769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.040791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.040802 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.143813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.143873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.143892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.143915 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.143934 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.246730 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.246790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.246799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.246818 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.246829 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.349624 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.349677 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.349688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.349701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.349710 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.452116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.452181 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.452198 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.452220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.452236 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.554959 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.555018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.555029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.555050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.555065 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.657606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.657653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.657665 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.657682 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.657694 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.739214 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:19 crc kubenswrapper[4688]: E1125 12:15:19.739488 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.760740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.760793 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.760804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.760822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.760832 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.863108 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.863148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.863159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.863177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.863189 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.966067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.966114 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.966125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.966142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:19 crc kubenswrapper[4688]: I1125 12:15:19.966154 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:19Z","lastTransitionTime":"2025-11-25T12:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.068492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.068592 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.068611 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.068635 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.068652 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.093177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.093231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.093244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.093261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.093273 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.107227 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.112227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.112290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.112306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.112324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.112360 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.123386 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.127766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.127794 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
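Every one of these patch attempts dies in the same place: the kubelet reaches the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743, but the certificate the webhook serves expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-11-25 — a typical symptom of starting a CRC VM long after its bundled certificates were issued. A small Go sketch (illustrative only; the endpoint is taken from the log) that reads the served certificate's validity window the same way the failing x509 check does:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log. Verification is disabled
	// on purpose: we want to read the certificate, not trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:   ", cert.Subject)
	fmt.Println("not before:", cert.NotBefore.UTC())
	fmt.Println("not after: ", cert.NotAfter.UTC()) // the log shows 2025-08-24T17:21:41Z
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired; node status patches will keep failing")
	}
}
```

Until that certificate is rotated, the retries below can only repeat the same failure.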
event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.127825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.127838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.127848 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.141296 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.146303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.146343 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.146353 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.146372 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.146387 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.159654 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.166036 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.166086 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.166101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.166121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.166141 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.185592 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.185818 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.187715 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.187753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.187765 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.187791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.187812 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.290671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.290760 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.290828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.290862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.290884 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.394191 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.394236 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.394247 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.394266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.394280 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.496941 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.496981 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.496992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.497008 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.497021 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.600340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.600413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.600438 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.600469 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.600491 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.703374 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.703427 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.703446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.703466 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.703481 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.739777 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.739809 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.740685 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.740807 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.740873 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.741431 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.741688 4688 scope.go:117] "RemoveContainer" containerID="3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2" Nov 25 12:15:20 crc kubenswrapper[4688]: E1125 12:15:20.741849 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.762338 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeac
e3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.777748 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.791195 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.805370 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.805425 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.805435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.805451 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.805480 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.806984 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.819334 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.828750 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.839216 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.854293 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.868316 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.892955 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.907637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.907685 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.907696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.907711 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.907721 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:20Z","lastTransitionTime":"2025-11-25T12:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.921791 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.942736 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.961205 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.975665 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:20 crc kubenswrapper[4688]: I1125 12:15:20.989111 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:20Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.004196 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:21Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.009798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.009824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.009831 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.009845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.009854 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.022317 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:
14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:21Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.036915 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:21Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.113409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.113447 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.113456 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.113471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.113481 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.216708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.216768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.216785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.216806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.216821 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.318986 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.319239 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.319538 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.319649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.319736 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.423488 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.423585 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.423604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.423633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.423659 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.526847 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.526901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.526915 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.526936 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.526953 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.628752 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.628800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.628816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.628837 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.628850 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.732654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.732719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.732745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.732774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.732794 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.739209 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:21 crc kubenswrapper[4688]: E1125 12:15:21.739664 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.835875 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.835983 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.836000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.836023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.836040 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.938556 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.938587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.938596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.938611 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:21 crc kubenswrapper[4688]: I1125 12:15:21.938619 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:21Z","lastTransitionTime":"2025-11-25T12:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.042043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.042228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.042484 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.044165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.045561 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.148735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.148788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.148812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.148841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.148862 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.253385 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.253426 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.253439 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.253453 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.253462 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.355612 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.355867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.355937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.355998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.356116 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.458628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.458918 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.458985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.459080 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.459159 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.561541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.561795 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.561862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.561932 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.562000 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.663995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.664204 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.664264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.664362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.664448 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.739332 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.739413 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:22 crc kubenswrapper[4688]: E1125 12:15:22.739587 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:22 crc kubenswrapper[4688]: E1125 12:15:22.739681 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.740080 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:22 crc kubenswrapper[4688]: E1125 12:15:22.740254 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.767170 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.767318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.767403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.767616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.767791 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.870277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.870325 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.870338 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.870359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:22 crc kubenswrapper[4688]: I1125 12:15:22.870372 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:22Z","lastTransitionTime":"2025-11-25T12:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 12:15:23 crc kubenswrapper[4688]: I1125 12:15:23.074668 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:23 crc kubenswrapper[4688]: I1125 12:15:23.075053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:23 crc kubenswrapper[4688]: I1125 12:15:23.075190 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:23 crc kubenswrapper[4688]: I1125 12:15:23.075365 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:23 crc kubenswrapper[4688]: I1125 12:15:23.075484 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:23Z","lastTransitionTime":"2025-11-25T12:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:23 crc kubenswrapper[4688]: I1125 12:15:23.379301 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8"
Nov 25 12:15:23 crc kubenswrapper[4688]: E1125 12:15:23.379479 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 12:15:23 crc kubenswrapper[4688]: E1125 12:15:23.379556 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs podName:45273ea2-4a52-4191-a40a-4b4d3b1a12dd nodeName:}" failed. No retries permitted until 2025-11-25 12:15:55.379538553 +0000 UTC m=+105.489167421 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs") pod "network-metrics-daemon-xbqw8" (UID: "45273ea2-4a52-4191-a40a-4b4d3b1a12dd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 12:15:23 crc kubenswrapper[4688]: I1125 12:15:23.739466 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8"
Nov 25 12:15:23 crc kubenswrapper[4688]: E1125 12:15:23.739626 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd"
Nov 25 12:15:24 crc kubenswrapper[4688]: I1125 12:15:24.006200 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:24 crc kubenswrapper[4688]: I1125 12:15:24.006248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:24 crc kubenswrapper[4688]: I1125 12:15:24.006259 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:24 crc kubenswrapper[4688]: I1125 12:15:24.006276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:24 crc kubenswrapper[4688]: I1125 12:15:24.006287 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:24Z","lastTransitionTime":"2025-11-25T12:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:24 crc kubenswrapper[4688]: I1125 12:15:24.739787 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 12:15:24 crc kubenswrapper[4688]: I1125 12:15:24.739879 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:15:24 crc kubenswrapper[4688]: E1125 12:15:24.739933 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 12:15:24 crc kubenswrapper[4688]: E1125 12:15:24.740023 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 12:15:24 crc kubenswrapper[4688]: I1125 12:15:24.740130 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:15:24 crc kubenswrapper[4688]: E1125 12:15:24.740297 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.033823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.033858 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.033866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.033882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.033892 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.137398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.137452 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.137464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.137482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.137721 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.180060 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/0.log"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.180119 4688 generic.go:334] "Generic (PLEG): container finished" podID="6c3971fa-9838-436e-97b1-be050abea83a" containerID="90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23" exitCode=1
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.180154 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlfw5" event={"ID":"6c3971fa-9838-436e-97b1-be050abea83a","Type":"ContainerDied","Data":"90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23"}
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.180572 4688 scope.go:117] "RemoveContainer" containerID="90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23"
Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.196018 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.211644 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:24Z\\\",\\\"message\\\":\\\"2025-11-25T12:14:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e\\\\n2025-11-25T12:14:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e to /host/opt/cni/bin/\\\\n2025-11-25T12:14:39Z [verbose] multus-daemon started\\\\n2025-11-25T12:14:39Z [verbose] Readiness Indicator file check\\\\n2025-11-25T12:15:24Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.227171 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.238811 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.240952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.241046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.241061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.241076 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.241087 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.252411 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.265552 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.277886 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.289863 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.301090 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.324276 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeac
e3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.341862 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.344207 4688 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.344227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.344235 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.344265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.344275 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.355311 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T1
2:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.367366 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.378170 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.388405 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.408957 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.425382 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.437756 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:25Z is after 2025-08-24T17:21:41Z" Nov 
25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.446842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.446866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.446874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.446904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.446913 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.548978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.549409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.549571 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.549708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.549859 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.652619 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.652666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.652679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.652695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.652709 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.739868 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:25 crc kubenswrapper[4688]: E1125 12:15:25.740035 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.755614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.755664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.755680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.755699 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.755719 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.858641 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.858699 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.858716 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.858739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.858758 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.960658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.960694 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.960702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.960714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:25 crc kubenswrapper[4688]: I1125 12:15:25.960724 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:25Z","lastTransitionTime":"2025-11-25T12:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.063884 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.063949 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.063973 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.064003 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.064024 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.166885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.166945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.166961 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.167026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.167047 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.185857 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/0.log" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.185938 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlfw5" event={"ID":"6c3971fa-9838-436e-97b1-be050abea83a","Type":"ContainerStarted","Data":"b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.201885 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.217319 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:24Z\\\",\\\"message\\\":\\\"2025-11-25T12:14:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e\\\\n2025-11-25T12:14:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e to /host/opt/cni/bin/\\\\n2025-11-25T12:14:39Z [verbose] multus-daemon started\\\\n2025-11-25T12:14:39Z [verbose] Readiness Indicator file check\\\\n2025-11-25T12:15:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.233885 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.245411 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.258375 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.270107 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.270162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.270174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.270194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.270206 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.272684 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.288446 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.307134 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.325905 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.349164 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeac
e3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.365970 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.372329 4688 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.372398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.372409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.372477 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.372492 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.380290 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T1
2:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.392799 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.402448 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.414806 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.435020 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.447690 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.460417 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:26Z is after 2025-08-24T17:21:41Z" Nov 
25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.475122 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.475160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.475171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.475186 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.475197 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.581355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.581736 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.581758 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.581811 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.581824 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.684179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.684283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.684295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.684311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.684321 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.739078 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.739166 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:26 crc kubenswrapper[4688]: E1125 12:15:26.739240 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.739376 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:26 crc kubenswrapper[4688]: E1125 12:15:26.739502 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:26 crc kubenswrapper[4688]: E1125 12:15:26.739709 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.787822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.787879 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.787890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.787907 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.787919 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.890726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.890761 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.890771 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.890791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.890808 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.993806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.993845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.993853 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.993867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:26 crc kubenswrapper[4688]: I1125 12:15:26.993876 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:26Z","lastTransitionTime":"2025-11-25T12:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.096859 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.096948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.096967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.096990 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.097010 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.199837 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.199890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.199901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.199919 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.199931 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.303241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.303314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.303333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.303357 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.303375 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.406400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.406448 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.406462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.406482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.406498 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.509967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.510559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.510576 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.510599 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.510614 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.613945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.614007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.614026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.614053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.614073 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.717735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.717801 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.717816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.717842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.717862 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.739386 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:27 crc kubenswrapper[4688]: E1125 12:15:27.739684 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.821558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.821618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.821631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.821650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.821668 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.924479 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.924599 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.924620 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.924651 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:27 crc kubenswrapper[4688]: I1125 12:15:27.924679 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:27Z","lastTransitionTime":"2025-11-25T12:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.028999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.029072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.029091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.029120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.029141 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.132515 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.132575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.132583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.132599 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.132611 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.235966 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.236029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.236044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.236067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.236084 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.338930 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.339007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.339032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.339064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.339088 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.441580 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.441649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.441664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.441680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.441691 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.544774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.544822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.544836 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.544855 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.544877 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.647302 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.647357 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.647374 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.647397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.647412 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.739375 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.739481 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.739440 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:28 crc kubenswrapper[4688]: E1125 12:15:28.739685 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:28 crc kubenswrapper[4688]: E1125 12:15:28.739827 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:28 crc kubenswrapper[4688]: E1125 12:15:28.739918 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.750177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.750227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.750236 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.750250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.750258 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.855306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.855629 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.855738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.855841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.855930 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.959095 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.959156 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.959178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.959207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:28 crc kubenswrapper[4688]: I1125 12:15:28.959227 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:28Z","lastTransitionTime":"2025-11-25T12:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.062457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.062511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.062568 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.062593 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.062609 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.165568 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.165663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.165686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.165717 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.165741 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.268969 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.269391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.269640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.269894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.270060 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.373279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.373346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.373357 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.373380 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.373392 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.476540 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.476579 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.476587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.476601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.476614 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.578886 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.578926 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.578947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.578963 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.578976 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.680813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.680866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.680874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.680886 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.680894 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.739684 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:29 crc kubenswrapper[4688]: E1125 12:15:29.739840 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.782913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.782948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.782960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.782975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.782986 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.885926 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.885968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.885978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.885994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.886006 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.988304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.988376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.988397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.988425 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:29 crc kubenswrapper[4688]: I1125 12:15:29.988445 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:29Z","lastTransitionTime":"2025-11-25T12:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.093071 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.093141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.093157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.093182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.093199 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.196110 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.196165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.196182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.196205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.196218 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.299086 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.299140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.299152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.299173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.299186 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.401222 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.401324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.401349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.401383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.401407 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.423603 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.429044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.429098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
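
[editor's note] The patch failure above is distinct from the CNI problem: the kubelet's node-status PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook on https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, months before the current time in the log (2025-11-25). TLS clients reject any leaf certificate whose [NotBefore, NotAfter] window excludes the current time, so every retry fails identically until the certificate is rotated. As a hedged sketch (address and dates taken from the log; the probe itself is illustrative, not part of the kubelet):

```go
// Minimal sketch: fetch the certificate the webhook endpoint presents and
// report its validity window, reproducing the x509 check that fails above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // only reading the cert, not trusting it
	})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("leaf valid %s .. %s\n",
		cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	switch {
	case now.After(cert.NotAfter):
		// Matches the log: "current time 2025-11-25T12:15:30Z is after
		// 2025-08-24T17:21:41Z" — node status patches fail until rotation.
		fmt.Println("certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is currently valid")
	}
}
```
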
event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.429114 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.429136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.429153 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.446275 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.453673 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.453714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.453727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.453743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.453755 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.475052 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.480511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.480595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.480683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.480765 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.480805 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.497053 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.501088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.501116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.501127 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.501142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.501154 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.522956 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.523107 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.524912 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.524985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.525012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.525040 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.525064 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.628679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.628743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.628757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.628778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.628794 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.732152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.732220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.732378 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.732409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.732432 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.740011 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.740011 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.740154 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.740417 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.740737 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:30 crc kubenswrapper[4688]: E1125 12:15:30.740976 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.759081 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.777767 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:24Z\\\",\\\"message\\\":\\\"2025-11-25T12:14:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e\\\\n2025-11-25T12:14:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e to /host/opt/cni/bin/\\\\n2025-11-25T12:14:39Z [verbose] multus-daemon started\\\\n2025-11-25T12:14:39Z [verbose] Readiness Indicator file check\\\\n2025-11-25T12:15:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.798438 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.811430 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.821779 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.834484 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.834540 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.834551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.834564 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.834572 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.835412 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.848001 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.862065 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.878124 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.907887 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeac
e3f6522ab7baeef1f971d5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.926660 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.939625 4688 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.939658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.939691 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.939708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.939717 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:30Z","lastTransitionTime":"2025-11-25T12:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.944397 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T1
2:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.959224 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.968718 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:30 crc kubenswrapper[4688]: I1125 12:15:30.990667 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:30Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.032721 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:31Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.042516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.042583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.042595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.042611 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.042624 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.046444 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:31Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.057845 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:31Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.144150 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.144194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.144203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.144218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.144228 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.246684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.246719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.246727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.246741 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.246749 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.349636 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.349688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.349707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.349732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.349753 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.452579 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.452672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.452701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.452736 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.453024 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.556260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.556359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.556376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.556391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.556403 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.659158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.659223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.659236 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.659255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.659268 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.739596 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:31 crc kubenswrapper[4688]: E1125 12:15:31.739750 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.762030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.762092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.762105 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.762128 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.762145 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.867023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.867087 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.867105 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.867141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.867157 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.971339 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.971396 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.971415 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.971437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:31 crc kubenswrapper[4688]: I1125 12:15:31.971449 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:31Z","lastTransitionTime":"2025-11-25T12:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.074683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.074737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.074757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.074786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.074822 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.177054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.177116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.177139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.177167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.177189 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.279967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.280019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.280034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.280052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.280064 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.382642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.382737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.382757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.382781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.382799 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.485207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.485247 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.485276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.485289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.485297 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.587679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.587735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.587743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.587756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.587765 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.690735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.690858 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.690900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.690934 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.690958 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.739899 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.739935 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.739988 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:32 crc kubenswrapper[4688]: E1125 12:15:32.740072 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:32 crc kubenswrapper[4688]: E1125 12:15:32.740200 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:32 crc kubenswrapper[4688]: E1125 12:15:32.740273 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.793941 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.793991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.794008 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.794029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.794048 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.897178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.897240 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.897264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.897291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:32 crc kubenswrapper[4688]: I1125 12:15:32.897314 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:32Z","lastTransitionTime":"2025-11-25T12:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.000259 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.000328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.000344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.000367 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.000383 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.103193 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.103268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.103304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.103336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.103356 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.206263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.206338 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.206364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.206391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.206411 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.310276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.310354 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.310378 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.310408 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.310429 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.413031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.413096 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.413121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.413149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.413171 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.516517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.516587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.516598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.516615 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.516627 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.619441 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.619506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.619552 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.619581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.619599 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.722667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.722735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.722760 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.722786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.722805 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.739137 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:33 crc kubenswrapper[4688]: E1125 12:15:33.739331 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.826126 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.826196 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.826219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.826250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.826273 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.929054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.929093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.929111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.929128 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:33 crc kubenswrapper[4688]: I1125 12:15:33.929142 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:33Z","lastTransitionTime":"2025-11-25T12:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.031880 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.031935 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.031952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.031978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.031995 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.134985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.135037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.135057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.135083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.135102 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.238503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.238648 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.238675 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.238706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.238731 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.341901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.341953 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.341975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.341999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.342016 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.445644 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.445710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.445744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.445773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.445798 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.548554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.548582 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.548590 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.548604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.548614 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.651296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.651357 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.651373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.651394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.651408 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.739796 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.739925 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:34 crc kubenswrapper[4688]: E1125 12:15:34.740048 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.740081 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:34 crc kubenswrapper[4688]: E1125 12:15:34.740278 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:34 crc kubenswrapper[4688]: E1125 12:15:34.740372 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.753887 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.753958 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.753982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.754015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.754038 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.856893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.856972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.856986 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.857005 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.857017 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.959929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.960010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.960032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.960061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:34 crc kubenswrapper[4688]: I1125 12:15:34.960080 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:34Z","lastTransitionTime":"2025-11-25T12:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.063156 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.063213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.063230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.063252 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.063267 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.166366 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.166404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.166414 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.166429 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.166441 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.269369 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.269486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.269511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.269572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.269598 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.372694 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.372773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.372799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.372828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.372852 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.475945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.476007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.476030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.476057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.476077 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.578695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.578772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.578795 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.578823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.578841 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.681585 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.681688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.681699 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.681726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.681743 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.739749 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:35 crc kubenswrapper[4688]: E1125 12:15:35.739970 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.740910 4688 scope.go:117] "RemoveContainer" containerID="3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.784724 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.784797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.784821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.784852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.784877 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.888343 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.888414 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.888431 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.888455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.888474 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.992155 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.992221 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.992238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.992261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:35 crc kubenswrapper[4688]: I1125 12:15:35.992275 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:35Z","lastTransitionTime":"2025-11-25T12:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.096075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.096138 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.096156 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.096180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.096200 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.199508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.199586 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.199598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.199622 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.199639 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.225740 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/2.log" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.302751 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.302806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.302816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.302840 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.302853 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.405352 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.405391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.405399 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.405411 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.405420 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.508868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.509227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.509240 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.509256 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.509266 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.557632 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.557815 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.557777291 +0000 UTC m=+150.667406209 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.557912 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.557983 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.558182 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.558207 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.558281 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.558257673 +0000 UTC m=+150.667886581 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.558323 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.558297964 +0000 UTC m=+150.667926872 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.611600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.611688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.611716 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.611752 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.611774 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.658981 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.659076 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.659211 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.659229 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.659245 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.659317 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-25 12:16:40.659300358 +0000 UTC m=+150.768929246 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.659354 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.659407 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.659429 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.659511 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.659485472 +0000 UTC m=+150.769114380 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.714517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.714573 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.714584 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.714601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.714612 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.738877 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.738970 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.739085 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.739232 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.739415 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:36 crc kubenswrapper[4688]: E1125 12:15:36.739577 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.817722 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.817756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.817766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.817784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.817800 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.920199 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.920237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.920250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.920268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:36 crc kubenswrapper[4688]: I1125 12:15:36.920281 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:36Z","lastTransitionTime":"2025-11-25T12:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.022785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.022831 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.022843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.022861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.022873 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.126913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.127090 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.127172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.127243 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.127305 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.229989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.230052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.230067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.230092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.230108 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.236884 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/2.log" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.239395 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.239859 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.261875 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.274814 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.287111 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.299319 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.312288 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.330810 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.332976 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.333135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.333243 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.333337 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.333457 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.345818 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.359811 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.374576 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.390584 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.405500 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.420749 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:24Z\\\",\\\"message\\\":\\\"2025-11-25T12:14:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e\\\\n2025-11-25T12:14:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e to /host/opt/cni/bin/\\\\n2025-11-25T12:14:39Z [verbose] multus-daemon started\\\\n2025-11-25T12:14:39Z [verbose] Readiness Indicator file check\\\\n2025-11-25T12:15:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.436038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.436209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.436292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.436373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.436447 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.438663 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.451154 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.474887 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.489234 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.503420 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.517813 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:37Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.539791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.540000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.540106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.540230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.540334 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.643121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.643167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.643176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.643191 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.643200 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.739083 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:37 crc kubenswrapper[4688]: E1125 12:15:37.739507 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.746438 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.746485 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.746498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.746514 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.746550 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.850840 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.850898 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.850908 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.850924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.850935 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.953920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.953978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.953992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.954013 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:37 crc kubenswrapper[4688]: I1125 12:15:37.954026 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:37Z","lastTransitionTime":"2025-11-25T12:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.056239 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.056281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.056293 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.056307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.056318 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.158266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.158306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.158316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.158328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.158337 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.244811 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/3.log" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.246059 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/2.log" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.250692 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" exitCode=1 Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.250759 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.250814 4688 scope.go:117] "RemoveContainer" containerID="3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.252307 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:15:38 crc kubenswrapper[4688]: E1125 12:15:38.252573 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.263026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.263081 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.263098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.263120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.263175 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.273603 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.287819 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.300692 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.311837 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.331190 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd
30547430f9739af29bba5138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe5afb12726b27b31eddbd78b18bd624b82eeace3f6522ab7baeef1f971d5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"message\\\":\\\"s:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 12:15:04.502902 6350 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 12:15:04.503250 6350 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:04.503277 6350 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 12:15:04.503280 6350 lb_config.go:1031] Cluster endpoints for openshift-machine-api/machine-api-controllers for network=default are: map[]\\\\nF1125 12:15:04.503331 6350 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:37Z\\\",\\\"message\\\":\\\" 6750 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108300 6750 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108374 6750 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108912 6750 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.110203 6750 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 12:15:37.110279 6750 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 12:15:37.111466 6750 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 12:15:37.111551 6750 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 12:15:37.111634 6750 factory.go:656] Stopping watch factory\\\\nI1125 12:15:37.118820 6750 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 12:15:37.118866 6750 services_controller.go:204] Setting up event handlers for services for 
network=default\\\\nI1125 12:15:37.118940 6750 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:37.118983 6750 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:15:37.119069 6750 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.344571 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.354978 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.365570 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.365663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.365684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.365708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.365725 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.374213 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.386724 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.398446 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.419062 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.432273 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.444203 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 
25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.456986 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.468406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.468489 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.468503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.468561 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.468575 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.470856 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:24Z\\\",\\\"message\\\":\\\"2025-11-25T12:14:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e\\\\n2025-11-25T12:14:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e to /host/opt/cni/bin/\\\\n2025-11-25T12:14:39Z [verbose] multus-daemon started\\\\n2025-11-25T12:14:39Z [verbose] Readiness Indicator file check\\\\n2025-11-25T12:15:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.485032 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.498700 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.512597 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:38Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.571392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.571489 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.571510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.571574 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.571602 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.674274 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.674341 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.674353 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.674376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.674390 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.739339 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.739369 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:38 crc kubenswrapper[4688]: E1125 12:15:38.739518 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:38 crc kubenswrapper[4688]: E1125 12:15:38.739616 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.739646 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:38 crc kubenswrapper[4688]: E1125 12:15:38.739906 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.776479 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.776511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.776519 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.776554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.776566 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.879002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.879266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.879362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.879462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.879580 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.983205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.983313 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.983326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.983345 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:38 crc kubenswrapper[4688]: I1125 12:15:38.983358 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:38Z","lastTransitionTime":"2025-11-25T12:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.086745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.087289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.087773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.088067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.088272 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.191855 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.191931 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.191948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.191966 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.191978 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.257354 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/3.log" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.262153 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:15:39 crc kubenswrapper[4688]: E1125 12:15:39.262356 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.284881 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda
17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.294646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.294701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.294718 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.294748 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.294768 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.300104 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.315872 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.331550 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.346402 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:24Z\\\",\\\"message\\\":\\\"2025-11-25T12:14:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e\\\\n2025-11-25T12:14:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e to /host/opt/cni/bin/\\\\n2025-11-25T12:14:39Z [verbose] multus-daemon started\\\\n2025-11-25T12:14:39Z [verbose] Readiness Indicator file check\\\\n2025-11-25T12:15:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.361094 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.372432 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.388577 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.399203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.399240 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.399250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.399265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.399274 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.405233 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.426050 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.438283 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.452662 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.482436 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd
30547430f9739af29bba5138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:37Z\\\",\\\"message\\\":\\\" 6750 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108300 6750 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108374 6750 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108912 6750 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.110203 6750 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 12:15:37.110279 6750 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 12:15:37.111466 6750 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 12:15:37.111551 6750 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 12:15:37.111634 6750 factory.go:656] Stopping watch factory\\\\nI1125 12:15:37.118820 6750 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 12:15:37.118866 6750 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 12:15:37.118940 6750 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:37.118983 6750 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:15:37.119069 6750 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.501496 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.501741 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.501864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.501954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.502024 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.503324 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.518977 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.536121 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.548855 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.561871 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:39Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.604561 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.604964 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.605319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.605495 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.605717 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.709222 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.709283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.709300 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.709322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.709338 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.739763 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:39 crc kubenswrapper[4688]: E1125 12:15:39.740240 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.812941 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.813018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.813031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.813056 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.813076 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.916176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.916232 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.916279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.916303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:39 crc kubenswrapper[4688]: I1125 12:15:39.916319 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:39Z","lastTransitionTime":"2025-11-25T12:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.020465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.020967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.021211 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.021375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.021551 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.124211 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.124264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.124281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.124304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.124323 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.227229 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.227299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.227318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.227341 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.227358 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.330153 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.330219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.330237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.330261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.330328 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.432866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.432904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.432937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.432952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.432962 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.535925 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.535981 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.535997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.536022 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.536077 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.585448 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.585566 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.585591 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.585620 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.585642 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.607933 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.615004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.615064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.615083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.615110 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.615128 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.632851 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.636104 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.636143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.636153 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.636170 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.636180 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.653294 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.657098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.657132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.657143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.657158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.657168 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.671291 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.675443 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.675482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.675490 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.675503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.675514 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.695340 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 
2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.695541 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.697641 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.697667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.697675 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.697687 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.697698 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.739473 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.739605 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.739672 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.739800 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.739868 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:40 crc kubenswrapper[4688]: E1125 12:15:40.739967 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.760152 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.787183 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.801606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.801842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.801864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.801921 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.801938 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.812775 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.828547 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.848647 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:37Z\\\",\\\"message\\\":\\\" 6750 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108300 6750 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108374 6750 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108912 6750 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.110203 6750 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 12:15:37.110279 6750 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 12:15:37.111466 6750 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 12:15:37.111551 6750 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 12:15:37.111634 6750 factory.go:656] Stopping watch factory\\\\nI1125 12:15:37.118820 6750 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 12:15:37.118866 6750 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 12:15:37.118940 6750 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:37.118983 6750 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:15:37.119069 6750 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.866177 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.879901 4688 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.894075 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.904619 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.904658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.904669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.904687 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.904698 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:40Z","lastTransitionTime":"2025-11-25T12:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.905111 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.918771 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.947686 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99
e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.964728 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:40 crc kubenswrapper[4688]: I1125 12:15:40.978606 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.000334 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:40Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.007826 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.008189 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.008308 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.008403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.008491 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.015895 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:24Z\\\",\\\"message\\\":\\\"2025-11-25T12:14:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e\\\\n2025-11-25T12:14:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e to /host/opt/cni/bin/\\\\n2025-11-25T12:14:39Z [verbose] multus-daemon started\\\\n2025-11-25T12:14:39Z [verbose] Readiness Indicator file check\\\\n2025-11-25T12:15:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.034289 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.051552 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:41Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.070365 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:41Z is after 2025-08-24T17:21:41Z" Nov 25 
12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.110842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.110895 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.110916 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.110939 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.110956 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.213750 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.213814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.213852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.213885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.213911 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.317241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.317317 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.317341 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.317377 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.317402 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.419763 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.419816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.419825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.419838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.419848 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.522820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.522911 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.522930 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.522956 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.522972 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.626311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.626670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.626773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.626881 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.626964 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.730676 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.730751 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.730767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.730794 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.730812 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.739173 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:41 crc kubenswrapper[4688]: E1125 12:15:41.739359 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.834070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.834139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.834155 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.834172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.834185 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.937637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.937702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.937719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.937744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:41 crc kubenswrapper[4688]: I1125 12:15:41.937763 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:41Z","lastTransitionTime":"2025-11-25T12:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.040899 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.040971 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.040989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.041014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.041035 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.145132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.145181 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.145194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.145210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.145225 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.248402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.248462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.248479 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.248504 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.248556 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.351757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.351838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.351848 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.351865 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.351875 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.454340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.454393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.454404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.454423 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.454432 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.557238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.557684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.557862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.558062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.558257 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.661756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.662023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.662123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.662225 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.662328 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.739848 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.739970 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:42 crc kubenswrapper[4688]: E1125 12:15:42.740132 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:42 crc kubenswrapper[4688]: E1125 12:15:42.740434 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.740696 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:42 crc kubenswrapper[4688]: E1125 12:15:42.741087 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.765565 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.765613 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.765630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.765654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.765670 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.869400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.869462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.869474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.869836 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.869866 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.973506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.973567 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.973579 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.973596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:42 crc kubenswrapper[4688]: I1125 12:15:42.973607 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:42Z","lastTransitionTime":"2025-11-25T12:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.076674 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.077013 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.077257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.077485 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.077788 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.181205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.181666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.181902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.182132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.182347 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.285774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.285845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.285863 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.285887 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.285906 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.388279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.388319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.388328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.388342 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.388351 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.491215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.491294 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.491319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.491350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.491368 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.595212 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.595286 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.595299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.595324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.595340 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.698592 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.698653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.698672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.698696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.698713 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.739695 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:43 crc kubenswrapper[4688]: E1125 12:15:43.740101 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.801788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.801884 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.801902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.801938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.801964 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.904684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.904746 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.904764 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.904790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:43 crc kubenswrapper[4688]: I1125 12:15:43.904809 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:43Z","lastTransitionTime":"2025-11-25T12:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.007800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.007888 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.007923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.007957 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.007982 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.111070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.111117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.111132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.111152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.111170 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.214056 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.214098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.214110 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.214125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.214136 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.316683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.316759 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.316775 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.316798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.316813 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.419928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.420005 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.420017 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.420070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.420084 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.522897 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.522952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.522967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.522995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.523012 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.625389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.625449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.625464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.625486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.625499 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.730606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.730690 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.730710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.730740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.730761 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.739166 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.739227 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.739292 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:44 crc kubenswrapper[4688]: E1125 12:15:44.739419 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:44 crc kubenswrapper[4688]: E1125 12:15:44.739602 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:44 crc kubenswrapper[4688]: E1125 12:15:44.740213 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.833572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.833841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.833962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.834048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.834126 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.938304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.938362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.938380 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.938405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:44 crc kubenswrapper[4688]: I1125 12:15:44.938436 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:44Z","lastTransitionTime":"2025-11-25T12:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.041926 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.042326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.042492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.042700 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.043329 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.146650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.146900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.147029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.147125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.147220 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.250548 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.250802 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.250867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.250959 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.251053 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.354543 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.354872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.354975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.355062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.355148 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.458039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.458093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.458105 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.458124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.458137 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.560646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.560679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.560687 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.560701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.560709 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.663664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.663947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.664084 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.664204 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.664333 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.739202 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:45 crc kubenswrapper[4688]: E1125 12:15:45.739604 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.752365 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.766923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.767842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.767877 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.767903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.767926 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.871261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.871329 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.871348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.871376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.871394 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.974789 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.974849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.974866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.974892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:45 crc kubenswrapper[4688]: I1125 12:15:45.974909 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:45Z","lastTransitionTime":"2025-11-25T12:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.078364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.078418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.078429 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.078445 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.078457 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.181007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.181079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.181098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.181117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.181131 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.283876 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.284399 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.284420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.284441 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.284453 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.388489 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.388579 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.388633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.388661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.388678 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.491251 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.491314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.491331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.491358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.491375 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.593732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.593803 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.593827 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.593858 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.593883 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.696868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.696915 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.696926 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.696944 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.696958 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.739828 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.739903 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:46 crc kubenswrapper[4688]: E1125 12:15:46.740039 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.739847 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:46 crc kubenswrapper[4688]: E1125 12:15:46.740150 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:46 crc kubenswrapper[4688]: E1125 12:15:46.740246 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.799326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.799362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.799373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.799388 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.799399 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.902619 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.902686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.902704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.902730 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:46 crc kubenswrapper[4688]: I1125 12:15:46.902747 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:46Z","lastTransitionTime":"2025-11-25T12:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.005270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.005315 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.005324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.005340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.005349 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.108642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.108672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.108686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.108702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.108715 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.211010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.211092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.211102 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.211117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.211146 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.314141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.314196 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.314215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.314238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.314256 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.416922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.416968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.416981 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.416998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.417013 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.519400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.519457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.519474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.519497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.519514 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.622585 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.622623 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.622649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.622664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.622674 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.725195 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.725226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.725234 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.725247 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.725257 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.739858 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:47 crc kubenswrapper[4688]: E1125 12:15:47.740046 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.827842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.827898 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.827910 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.827928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.827941 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.930203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.930284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.930297 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.930314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:47 crc kubenswrapper[4688]: I1125 12:15:47.930327 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:47Z","lastTransitionTime":"2025-11-25T12:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.033343 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.033396 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.033412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.033432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.033446 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.136244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.136291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.136301 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.136316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.136325 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.239238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.239407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.239489 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.239570 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.239603 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.341846 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.342210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.342459 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.342707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.342870 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.446712 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.447180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.447394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.447661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.447952 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.551584 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.551662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.551681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.551709 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.551726 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.655192 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.655257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.655269 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.655291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.655305 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.739675 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.740106 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.740355 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:48 crc kubenswrapper[4688]: E1125 12:15:48.740494 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:48 crc kubenswrapper[4688]: E1125 12:15:48.740359 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:48 crc kubenswrapper[4688]: E1125 12:15:48.741001 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.758196 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.758245 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.758263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.758284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.758301 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.861952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.862093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.862118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.862147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.862173 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.966725 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.966807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.966829 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.966857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:48 crc kubenswrapper[4688]: I1125 12:15:48.966880 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:48Z","lastTransitionTime":"2025-11-25T12:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.069806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.069873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.069892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.069917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.069935 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.173402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.173468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.173489 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.173546 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.173567 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.276079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.276122 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.276133 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.276149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.276161 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.379766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.379843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.379861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.379885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.379903 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.482920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.483285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.483377 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.483477 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.483600 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.586581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.586643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.586653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.586674 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.586692 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.689710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.689779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.689797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.689828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.689847 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.739858 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:49 crc kubenswrapper[4688]: E1125 12:15:49.740074 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.792173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.792239 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.792257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.792280 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.792298 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.895598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.895686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.895716 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.895743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.895760 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.998727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.998786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.998804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.998826 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:49 crc kubenswrapper[4688]: I1125 12:15:49.998842 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:49Z","lastTransitionTime":"2025-11-25T12:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.102015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.102074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.102092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.102114 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.102132 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.205227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.205283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.205304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.205330 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.205351 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.309201 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.309593 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.309728 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.309872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.309997 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.413255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.413829 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.413936 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.414061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.414234 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.517278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.517355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.517379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.517410 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.517433 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.619952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.620117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.620182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.620210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.620229 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.723319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.723384 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.723402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.723426 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.723443 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.739844 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.739984 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.739990 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.740394 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.740467 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.740703 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.758011 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0399ec58-935d-44d2-8687-88c572bc636f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d90d361523c651a0699e011300fd694fa1a8ca06f78acd6f621984f0ef490584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84608c2f45eb7b5753f5db1e3f63ff8bd4a373d2df6bb2d481f597755782d666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcznx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-59snw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.772304 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.792098 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xlfw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c3971fa-9838-436e-97b1-be050abea83a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:24Z\\\",\\\"message\\\":\\\"2025-11-25T12:14:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e\\\\n2025-11-25T12:14:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_76d6d9db-cd40-4ce5-9cdc-855fd72ee27e to /host/opt/cni/bin/\\\\n2025-11-25T12:14:39Z [verbose] multus-daemon started\\\\n2025-11-25T12:14:39Z [verbose] Readiness Indicator file check\\\\n2025-11-25T12:15:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r989l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xlfw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.812288 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2f03eab-5a08-4ebf-8a2e-0871c9fcee61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e55bfde9c8ec35602c59c4debf66abbe035a7400c88a3a0a49fd5d2f20f39a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4db184145d60b21638e21af7adfb177e23c7742a7978b19c039cf5e2c1d42852\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9bbfe37ea1eef494b54651d035e282696db0284a23c6813db682fdb7810593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8868038bae0c847d480c6f27277e8a6f82e9661a76a2ad2bdf56b4b10d2f675\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a28d912772895620990e5a58fa8b3f6f9ee6c57715bc27b3812e75fd73cef60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e788c16ce95e21ea34dd6b761297dc094fce0ba8664295f4c80d704d812992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdd5adcf0658e45af02dc16ec1a65bc0cdc63be7f5c8f4d989637d61aaa48cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wt7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhtg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.826271 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.826343 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.826367 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.826395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.826418 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.827219 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kxr8q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4c79c4-3997-43ef-9c05-4fe44ff31141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290a201a47877359e16b60bc6ae8f068700575d7ad32db7ca9a76ad1367a8e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5lzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kxr8q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.854727 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T12:15:37Z\\\",\\\"message\\\":\\\" 6750 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108300 6750 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108374 6750 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.108912 6750 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 12:15:37.110203 6750 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 12:15:37.110279 6750 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 12:15:37.111466 6750 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 12:15:37.111551 6750 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 12:15:37.111634 6750 factory.go:656] Stopping watch factory\\\\nI1125 12:15:37.118820 6750 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 12:15:37.118866 6750 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 12:15:37.118940 6750 ovnkube.go:599] Stopped ovnkube\\\\nI1125 12:15:37.118983 6750 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 12:15:37.119069 6750 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z95pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-csgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.869878 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4b44e7-3f28-4c4a-80e8-69e8a08c58d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d981b42bdd69bc889a9dd14ac5ca627b1e2a7a7c207090cd3bdbcf62552c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a49ec5a7c8c631a1340721b75bc42aced85f7d683c814b62e2599e9339d8235\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a471d558fdb4d89622249f180a836c918480f28b00367dd9d4469f0540dab58d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.892408 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.894378 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.894402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.894412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.894427 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.894439 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.907940 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z"
Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.912071 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.916171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.916263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.916281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.916307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.916326 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.919662 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd8b76-41b5-4979-be54-9c7441c21aca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://134ef40ac2413d35339690434dfca6eb1221ec92cad6520058ce7359c6e6352c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x428r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6pql6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.936275 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cvktt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dceabd45-6147-4016-b8ba-5d3dac35df54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3686918133c5dffbfde495ce07ab51591d9e6699c8997b97b3ba43d641684e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flhjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cvktt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.936488 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.941605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.941670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.941684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.941704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.941719 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.948576 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbqw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.955380 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.957709 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6346df8f-8523-4aa8-9bca-57cb98abf657\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7eb78509ebf87118b254c047fbe3d4c62ba269a7a3eadcf330a2015c709ebb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2e8395680eb84a8cfb1d38a48d794b8e8f6b26869f0a026cb4d9c5a1cb2cf69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2e8395680eb84a8cfb1d38a48d794b8e8f6b26869f0a026cb4d9c5a1cb2cf69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.960039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.960091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.960109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.960128 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.960142 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.971204 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.973622 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd8d493b-63c6-4e36-9c7a-58f2ed57f378\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T12:14:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 12:14:31.749121 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 12:14:31.749339 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 12:14:31.750233 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1959170740/tls.crt::/tmp/serving-cert-1959170740/tls.key\\\\\\\"\\\\nI1125 12:14:31.990949 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 12:14:31.996314 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 12:14:31.996336 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 12:14:31.996356 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 12:14:31.996361 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 12:14:31.999809 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 12:14:31.999833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 12:14:31.999844 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 12:14:31.999848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 12:14:31.999852 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 12:14:31.999855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 12:14:31.999854 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 12:14:32.001817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.976682 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.976734 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.976752 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.976778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.976796 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.987273 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24c5df92-f9eb-4d47-be22-4a403d57df3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bf935611e16f0ffe5499b081f0d03c02d845710a07aaab961528930e6473f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19668c991792d85ca188f55ebd549bbcb90e8c4e4d631571ff54d3b951c0025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddcded92dbc2e834134bb3b7bfec3a618f6ef0aeb3ecb3ed44b3cb2ba2c960d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a012f10ba56e3fa7ae3cba12f1a42a96c917b28a54eb5f2ad66d4c479f04a5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.991337 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T12:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1
688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":4977
42284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffd64935-041b-437f-a9ed-f5d9731454d0\\\",\\\"systemUUID\\\":\\\"f59bf0f2-499e-4429-a2b3-013f2c9a413d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:50Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:50 crc kubenswrapper[4688]: E1125 12:15:50.991513 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.993594 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.993628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.993637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.993651 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:50 crc kubenswrapper[4688]: I1125 12:15:50.993662 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:50Z","lastTransitionTime":"2025-11-25T12:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.006653 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a23170cb321000f051c5956e87b288a8bfe3b6b0f58da48b11d521a3455346bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.031307 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1365f146-7052-4af2-8d20-5f879cb667ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f99ffc0088fe5f3e17198ca60c3eaf9f1ca2013fdbdc38745517ab50439df1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb9b7cc74f36e109b579adef2cf6af6d8b1e5abb9962a5e36c9fc305cadd4727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e3acbfa8ec66574f876ebc30cebcfe87dfbfc2d70f36b24b3ec3d734fdbfb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9201411aaf996ce8d4a83b3ce9f5f8fb35bda17f92226d214afb02f4bf9a14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://585d137afbc75556dbbbf540e38398632f39d3be0181aa5c501c546ba1223173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ab42f5480cc9bfc95fe04613f2b815a5d2aba0ccf3141c784f6a3026da18a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c295ed99e5f8874fa99df70042bba1562869376c74d97c1e20bbf708dbc7f1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:14Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cde45cc75a2035066750753291bfaafdbe7d8b8413ee162321886091c9652229\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T12:14:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T12:14:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T12:14:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.047569 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6967e12527d5ada79a2ca98313d4cf99e95bd6fddf19718c693a00851d6ef066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227a9aec71db1e8eea241a9a40025073224a2371a7ac5f02396461c76e4cc5f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.066183 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T12:14:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9712db4de3c87c6efda4238eba8fac4ec102eaeab8d44c247bcbd40fb0e95ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T12:14:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T12:15:51Z is after 2025-08-24T17:21:41Z" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.097131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.097191 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.097217 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.097249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.097273 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.200047 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.200085 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.200096 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.200111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.200121 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.303043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.303094 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.303107 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.303126 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.303142 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.406991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.407413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.407652 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.407805 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.407972 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.510985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.511073 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.511094 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.511115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.511165 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.614802 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.615118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.615303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.615456 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.615632 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.719172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.719235 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.719257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.719287 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.719307 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.739211 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:51 crc kubenswrapper[4688]: E1125 12:15:51.739505 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.822672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.822728 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.822745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.822768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.822786 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.926202 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.926315 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.926337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.926360 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:51 crc kubenswrapper[4688]: I1125 12:15:51.926376 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:51Z","lastTransitionTime":"2025-11-25T12:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.028992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.029050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.029068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.029092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.029108 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.132799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.132877 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.132894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.132913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.132932 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.237227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.238232 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.238403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.238616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.238773 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.342471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.342604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.342638 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.342665 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.342686 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.445882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.445946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.445967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.445991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.446008 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.549636 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.549701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.549720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.549749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.549771 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.652566 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.652623 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.652641 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.652664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.652683 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.739989 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.740056 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:52 crc kubenswrapper[4688]: E1125 12:15:52.740261 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.740427 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:52 crc kubenswrapper[4688]: E1125 12:15:52.741075 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.741563 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:15:52 crc kubenswrapper[4688]: E1125 12:15:52.741637 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:52 crc kubenswrapper[4688]: E1125 12:15:52.741830 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.755878 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.756175 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.756361 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.756502 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.756757 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.860189 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.860246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.860263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.860285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.860302 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.963372 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.963647 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.963734 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.963857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:52 crc kubenswrapper[4688]: I1125 12:15:52.963958 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:52Z","lastTransitionTime":"2025-11-25T12:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.067062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.067123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.067140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.067168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.067191 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.169876 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.170109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.170170 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.170235 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.170292 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.273028 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.273102 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.273124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.273149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.273168 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.376558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.376628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.376645 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.376671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.376690 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.480162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.480222 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.480249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.480267 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.480280 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.583432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.583510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.583565 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.583596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.583614 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.686670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.686715 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.686726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.686744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.686755 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.738854 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:53 crc kubenswrapper[4688]: E1125 12:15:53.739066 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.789997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.790062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.790079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.790106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.790127 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.893625 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.893690 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.893708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.893731 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.893752 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.996649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.996698 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.996715 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.996738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:53 crc kubenswrapper[4688]: I1125 12:15:53.996756 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:53Z","lastTransitionTime":"2025-11-25T12:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.100116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.100200 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.100224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.100255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.100277 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.203360 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.203434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.203468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.203487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.203500 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.307823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.307881 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.307898 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.307924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.307945 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.411355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.411458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.411498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.411543 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.411562 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.514331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.514368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.514379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.514395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.514405 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.617336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.617406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.617424 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.617450 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.617466 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.721053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.721155 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.721171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.721190 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.721204 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.739797 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.739832 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:15:54 crc kubenswrapper[4688]: E1125 12:15:54.739922 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.739934 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:15:54 crc kubenswrapper[4688]: E1125 12:15:54.740062 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:15:54 crc kubenswrapper[4688]: E1125 12:15:54.740118 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.824157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.824217 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.824236 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.824261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.824280 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.926805 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.926857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.926871 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.926888 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:54 crc kubenswrapper[4688]: I1125 12:15:54.926901 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:54Z","lastTransitionTime":"2025-11-25T12:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.029077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.029119 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.029135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.029154 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.029170 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.131869 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.131954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.131998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.132015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.132027 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.235962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.236019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.236035 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.236057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.236073 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.339295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.339383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.339398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.339417 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.339433 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.385120 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:55 crc kubenswrapper[4688]: E1125 12:15:55.385320 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:15:55 crc kubenswrapper[4688]: E1125 12:15:55.385390 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs podName:45273ea2-4a52-4191-a40a-4b4d3b1a12dd nodeName:}" failed. No retries permitted until 2025-11-25 12:16:59.385371012 +0000 UTC m=+169.494999880 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs") pod "network-metrics-daemon-xbqw8" (UID: "45273ea2-4a52-4191-a40a-4b4d3b1a12dd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.443809 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.443896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.443917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.443950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.443975 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.546924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.547067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.547088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.547121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.547141 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.651163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.651851 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.651882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.651920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.651941 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.739963 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:15:55 crc kubenswrapper[4688]: E1125 12:15:55.740444 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.754880 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.754951 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.754971 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.754996 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.755015 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.857773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.857901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.857926 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.857962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.857989 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.961436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.961508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.961555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.961582 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:55 crc kubenswrapper[4688]: I1125 12:15:55.961599 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:55Z","lastTransitionTime":"2025-11-25T12:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.065086 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.065160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.065184 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.065214 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.065235 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:56Z","lastTransitionTime":"2025-11-25T12:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.167141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.167252 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.167263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.167296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.167306 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:56Z","lastTransitionTime":"2025-11-25T12:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.271949 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.272015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.272034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.272074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.272092 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:56Z","lastTransitionTime":"2025-11-25T12:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.375568 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.375646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.375672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.375706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.375729 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:56Z","lastTransitionTime":"2025-11-25T12:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.479211 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.479299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.479322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.479353 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.479371 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:56Z","lastTransitionTime":"2025-11-25T12:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.582615 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.582703 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.582797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.582830 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.582851 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:56Z","lastTransitionTime":"2025-11-25T12:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.686138 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.686250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.686271 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.686294 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.686311 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:56Z","lastTransitionTime":"2025-11-25T12:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.739414 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.739464 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.739481 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 12:15:56 crc kubenswrapper[4688]: E1125 12:15:56.739606 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 12:15:56 crc kubenswrapper[4688]: E1125 12:15:56.739696 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 12:15:56 crc kubenswrapper[4688]: E1125 12:15:56.739842 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.792919 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.793026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.793048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.793075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:56 crc kubenswrapper[4688]: I1125 12:15:56.793101 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:56Z","lastTransitionTime":"2025-11-25T12:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
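[Editor's note] The five-record pattern above repeats because the kubelet re-publishes the node's Ready=False condition on every status update (roughly every 100 ms in this capture) until CRI-O reports NetworkReady=true; the pod_workers errors are the sync loop declining to create pod sandboxes for the same reason. As a hedged illustration (not part of this log), the condition that the setters.go records are writing can be read back with client-go; the kubeconfig path below is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; adjust for the actual host.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// While the CNI config is missing, Ready prints as
		// False/KubeletNotReady, matching the setters.go records above.
		fmt.Printf("%s=%s reason=%s msg=%s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}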
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.739025 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8"
Nov 25 12:15:57 crc kubenswrapper[4688]: E1125 12:15:57.739205 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.823570 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.823626 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.823642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.823666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.823684 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:57Z","lastTransitionTime":"2025-11-25T12:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.926696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.926741 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.926754 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.926770 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:57 crc kubenswrapper[4688]: I1125 12:15:57.926782 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:57Z","lastTransitionTime":"2025-11-25T12:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.647973 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.648039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.648060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.648085 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.648103 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:58Z","lastTransitionTime":"2025-11-25T12:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.739205 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.739294 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.739206 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:15:58 crc kubenswrapper[4688]: E1125 12:15:58.739448 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 12:15:58 crc kubenswrapper[4688]: E1125 12:15:58.739639 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 12:15:58 crc kubenswrapper[4688]: E1125 12:15:58.739773 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
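[Editor's note] Every "No sandbox for pod can be found" retry fails at the same gate: CRI-O finds no file in /etc/kubernetes/cni/net.d/. On a CRC/OpenShift node that directory is populated by the cluster network operator (OVN-Kubernetes via Multus) once it comes up, so this log implies no manual fix; purely to illustrate the file format being waited on, a generic CNI config using the reference bridge and host-local plugins (all values illustrative, not the OpenShift config) would be:

{
  "cniVersion": "0.4.0",
  "name": "example-bridge-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16"
  }
}

Saved as e.g. /etc/kubernetes/cni/net.d/10-example.conf, any such valid config would flip NetworkReady to true on the next runtime status poll; on this node it is the operator-managed config that eventually clears the condition.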
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.750656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.750712 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.750724 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.750745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.750757 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:58Z","lastTransitionTime":"2025-11-25T12:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.853299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.853335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.853365 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.853382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:58 crc kubenswrapper[4688]: I1125 12:15:58.853391 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:58Z","lastTransitionTime":"2025-11-25T12:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.576343 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.576410 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.576424 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.576447 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.576463 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:59Z","lastTransitionTime":"2025-11-25T12:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.679879 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.679942 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.679960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.679984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.680004 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:59Z","lastTransitionTime":"2025-11-25T12:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.739896 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8"
Nov 25 12:15:59 crc kubenswrapper[4688]: E1125 12:15:59.740217 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd"
pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.783979 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.784044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.784066 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.784099 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.784127 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:59Z","lastTransitionTime":"2025-11-25T12:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.888726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.888822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.888848 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.888881 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:15:59 crc kubenswrapper[4688]: I1125 12:15:59.888903 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:15:59Z","lastTransitionTime":"2025-11-25T12:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.617914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.618003 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.618029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.618060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.618082 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:16:00Z","lastTransitionTime":"2025-11-25T12:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.722593 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.722683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.722712 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.722747 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.722766 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:16:00Z","lastTransitionTime":"2025-11-25T12:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.739679 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 12:16:00 crc kubenswrapper[4688]: E1125 12:16:00.740346 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.740410 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 12:16:00 crc kubenswrapper[4688]: E1125 12:16:00.740639 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.741635 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 12:16:00 crc kubenswrapper[4688]: E1125 12:16:00.741828 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.809317 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.809287649 podStartE2EDuration="1m26.809287649s" podCreationTimestamp="2025-11-25 12:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:00.784905718 +0000 UTC m=+110.894534586" watchObservedRunningTime="2025-11-25 12:16:00.809287649 +0000 UTC m=+110.918916547"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.827044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.827138 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.827160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.827193 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.827219 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:16:00Z","lastTransitionTime":"2025-11-25T12:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.855719 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podStartSLOduration=84.855693884 podStartE2EDuration="1m24.855693884s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:00.854567933 +0000 UTC m=+110.964196861" watchObservedRunningTime="2025-11-25 12:16:00.855693884 +0000 UTC m=+110.965322772"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.922669 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.922649676 podStartE2EDuration="15.922649676s" podCreationTimestamp="2025-11-25 12:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:00.922151031 +0000 UTC m=+111.031779899" watchObservedRunningTime="2025-11-25 12:16:00.922649676 +0000 UTC m=+111.032278544"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.929580 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.929616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.929628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.929662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.929674 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:16:00Z","lastTransitionTime":"2025-11-25T12:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.948153 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.948134835 podStartE2EDuration="1m28.948134835s" podCreationTimestamp="2025-11-25 12:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:00.948038742 +0000 UTC m=+111.057667650" watchObservedRunningTime="2025-11-25 12:16:00.948134835 +0000 UTC m=+111.057763723" Nov 25 12:16:00 crc kubenswrapper[4688]: I1125 12:16:00.965786 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.965764701 podStartE2EDuration="1m0.965764701s" podCreationTimestamp="2025-11-25 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:00.965136055 +0000 UTC m=+111.074764933" watchObservedRunningTime="2025-11-25 12:16:00.965764701 +0000 UTC m=+111.075393579" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.032720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.032770 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.032783 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.032803 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.032816 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:16:01Z","lastTransitionTime":"2025-11-25T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.054285 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cvktt" podStartSLOduration=85.054260656 podStartE2EDuration="1m25.054260656s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:01.003973636 +0000 UTC m=+111.113602514" watchObservedRunningTime="2025-11-25 12:16:01.054260656 +0000 UTC m=+111.163889534" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.058235 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=87.058208123 podStartE2EDuration="1m27.058208123s" podCreationTimestamp="2025-11-25 12:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:01.050466873 +0000 UTC m=+111.160095741" watchObservedRunningTime="2025-11-25 12:16:01.058208123 +0000 UTC m=+111.167836991" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.117657 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xlfw5" podStartSLOduration=85.1176368 podStartE2EDuration="1m25.1176368s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:01.11612511 +0000 UTC m=+111.225753968" watchObservedRunningTime="2025-11-25 12:16:01.1176368 +0000 UTC m=+111.227265668" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.135136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.135182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.135197 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.135215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.135230 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:16:01Z","lastTransitionTime":"2025-11-25T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.140042 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gnhtg" podStartSLOduration=85.140024726 podStartE2EDuration="1m25.140024726s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:01.13941886 +0000 UTC m=+111.249047728" watchObservedRunningTime="2025-11-25 12:16:01.140024726 +0000 UTC m=+111.249653614" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.155405 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kxr8q" podStartSLOduration=85.155369341 podStartE2EDuration="1m25.155369341s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:01.15461276 +0000 UTC m=+111.264241638" watchObservedRunningTime="2025-11-25 12:16:01.155369341 +0000 UTC m=+111.264998249" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.175657 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-59snw" podStartSLOduration=84.175636499 podStartE2EDuration="1m24.175636499s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:01.174419527 +0000 UTC m=+111.284048435" watchObservedRunningTime="2025-11-25 12:16:01.175636499 +0000 UTC m=+111.285265377" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.188575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.188668 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.188693 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.188727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.188750 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T12:16:01Z","lastTransitionTime":"2025-11-25T12:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.238754 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8"] Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.239137 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.242624 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.242692 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.245135 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.246055 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.349481 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/330947e1-6a6d-458c-932b-900ad20de5e4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.349533 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/330947e1-6a6d-458c-932b-900ad20de5e4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.349569 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/330947e1-6a6d-458c-932b-900ad20de5e4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.349607 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330947e1-6a6d-458c-932b-900ad20de5e4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.349624 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/330947e1-6a6d-458c-932b-900ad20de5e4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.450820 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/330947e1-6a6d-458c-932b-900ad20de5e4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc 
kubenswrapper[4688]: I1125 12:16:01.450907 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/330947e1-6a6d-458c-932b-900ad20de5e4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.450957 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/330947e1-6a6d-458c-932b-900ad20de5e4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.450978 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/330947e1-6a6d-458c-932b-900ad20de5e4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.451007 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330947e1-6a6d-458c-932b-900ad20de5e4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.451127 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/330947e1-6a6d-458c-932b-900ad20de5e4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.451154 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/330947e1-6a6d-458c-932b-900ad20de5e4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.452654 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/330947e1-6a6d-458c-932b-900ad20de5e4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.465484 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330947e1-6a6d-458c-932b-900ad20de5e4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.479300 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/330947e1-6a6d-458c-932b-900ad20de5e4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9j6c8\" (UID: \"330947e1-6a6d-458c-932b-900ad20de5e4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.562498 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" Nov 25 12:16:01 crc kubenswrapper[4688]: I1125 12:16:01.739045 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:01 crc kubenswrapper[4688]: E1125 12:16:01.739507 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:02 crc kubenswrapper[4688]: I1125 12:16:02.351648 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" event={"ID":"330947e1-6a6d-458c-932b-900ad20de5e4","Type":"ContainerStarted","Data":"18a6fe727a671eed5ee77479352cb192cb1032e2356e672aefb49ec2ec0fc66c"} Nov 25 12:16:02 crc kubenswrapper[4688]: I1125 12:16:02.351746 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" event={"ID":"330947e1-6a6d-458c-932b-900ad20de5e4","Type":"ContainerStarted","Data":"c063813593788f275b736994c2675b9bfe8863ef19a78a77439546aded6204a5"} Nov 25 12:16:02 crc kubenswrapper[4688]: I1125 12:16:02.372269 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9j6c8" podStartSLOduration=86.372247322 podStartE2EDuration="1m26.372247322s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:02.371097031 +0000 UTC m=+112.480725909" watchObservedRunningTime="2025-11-25 12:16:02.372247322 +0000 UTC m=+112.481876200" Nov 25 12:16:02 crc kubenswrapper[4688]: I1125 12:16:02.739392 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:02 crc kubenswrapper[4688]: E1125 12:16:02.739725 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:02 crc kubenswrapper[4688]: I1125 12:16:02.739963 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:02 crc kubenswrapper[4688]: I1125 12:16:02.740056 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:02 crc kubenswrapper[4688]: E1125 12:16:02.740171 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:02 crc kubenswrapper[4688]: E1125 12:16:02.740317 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:03 crc kubenswrapper[4688]: I1125 12:16:03.739146 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:03 crc kubenswrapper[4688]: E1125 12:16:03.739280 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:04 crc kubenswrapper[4688]: I1125 12:16:04.739889 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:04 crc kubenswrapper[4688]: I1125 12:16:04.739991 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:04 crc kubenswrapper[4688]: E1125 12:16:04.740072 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:04 crc kubenswrapper[4688]: E1125 12:16:04.740213 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:04 crc kubenswrapper[4688]: I1125 12:16:04.740471 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:04 crc kubenswrapper[4688]: E1125 12:16:04.740643 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:05 crc kubenswrapper[4688]: I1125 12:16:05.738766 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:05 crc kubenswrapper[4688]: E1125 12:16:05.738902 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:06 crc kubenswrapper[4688]: I1125 12:16:06.739478 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:06 crc kubenswrapper[4688]: I1125 12:16:06.739512 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:06 crc kubenswrapper[4688]: I1125 12:16:06.739469 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:06 crc kubenswrapper[4688]: E1125 12:16:06.739614 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:06 crc kubenswrapper[4688]: E1125 12:16:06.739784 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:06 crc kubenswrapper[4688]: E1125 12:16:06.740588 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:06 crc kubenswrapper[4688]: I1125 12:16:06.740983 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:16:06 crc kubenswrapper[4688]: E1125 12:16:06.741182 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-csgdv_openshift-ovn-kubernetes(c9bf79ce-8d9b-472b-93a8-8e4c779bfb62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" Nov 25 12:16:07 crc kubenswrapper[4688]: I1125 12:16:07.739842 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:07 crc kubenswrapper[4688]: E1125 12:16:07.740019 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:08 crc kubenswrapper[4688]: I1125 12:16:08.739159 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:08 crc kubenswrapper[4688]: I1125 12:16:08.739212 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:08 crc kubenswrapper[4688]: I1125 12:16:08.739251 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:08 crc kubenswrapper[4688]: E1125 12:16:08.739368 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:08 crc kubenswrapper[4688]: E1125 12:16:08.739455 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:08 crc kubenswrapper[4688]: E1125 12:16:08.739556 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:09 crc kubenswrapper[4688]: I1125 12:16:09.739485 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:09 crc kubenswrapper[4688]: E1125 12:16:09.739740 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:10 crc kubenswrapper[4688]: E1125 12:16:10.730195 4688 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 12:16:10 crc kubenswrapper[4688]: I1125 12:16:10.739724 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:10 crc kubenswrapper[4688]: I1125 12:16:10.739869 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:10 crc kubenswrapper[4688]: E1125 12:16:10.741048 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:10 crc kubenswrapper[4688]: I1125 12:16:10.741117 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:10 crc kubenswrapper[4688]: E1125 12:16:10.741291 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:10 crc kubenswrapper[4688]: E1125 12:16:10.741341 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:10 crc kubenswrapper[4688]: E1125 12:16:10.880147 4688 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 25 12:16:11 crc kubenswrapper[4688]: E1125 12:16:11.378917 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3971fa_9838_436e_97b1_be050abea83a.slice/crio-conmon-b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:16:11 crc kubenswrapper[4688]: I1125 12:16:11.385330 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/1.log" Nov 25 12:16:11 crc kubenswrapper[4688]: I1125 12:16:11.386753 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/0.log" Nov 25 12:16:11 crc kubenswrapper[4688]: I1125 12:16:11.386799 4688 generic.go:334] "Generic (PLEG): container finished" podID="6c3971fa-9838-436e-97b1-be050abea83a" containerID="b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37" exitCode=1 Nov 25 12:16:11 crc kubenswrapper[4688]: I1125 12:16:11.386838 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlfw5" event={"ID":"6c3971fa-9838-436e-97b1-be050abea83a","Type":"ContainerDied","Data":"b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37"} Nov 25 12:16:11 crc kubenswrapper[4688]: I1125 12:16:11.386878 4688 scope.go:117] "RemoveContainer" containerID="90dc3feb535d64ccb06f2d7d5c5c0f038b6dd4b698c10c4121118278b0c38d23" Nov 25 12:16:11 crc kubenswrapper[4688]: I1125 12:16:11.387625 4688 scope.go:117] "RemoveContainer" containerID="b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37" Nov 25 12:16:11 crc kubenswrapper[4688]: E1125 12:16:11.387953 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xlfw5_openshift-multus(6c3971fa-9838-436e-97b1-be050abea83a)\"" pod="openshift-multus/multus-xlfw5" podUID="6c3971fa-9838-436e-97b1-be050abea83a" Nov 25 12:16:11 crc kubenswrapper[4688]: I1125 12:16:11.739651 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:11 crc kubenswrapper[4688]: E1125 12:16:11.739780 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:12 crc kubenswrapper[4688]: I1125 12:16:12.393188 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/1.log" Nov 25 12:16:12 crc kubenswrapper[4688]: I1125 12:16:12.739092 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:12 crc kubenswrapper[4688]: E1125 12:16:12.739268 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:12 crc kubenswrapper[4688]: I1125 12:16:12.739570 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:12 crc kubenswrapper[4688]: E1125 12:16:12.739666 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:12 crc kubenswrapper[4688]: I1125 12:16:12.739840 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:12 crc kubenswrapper[4688]: E1125 12:16:12.739968 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:13 crc kubenswrapper[4688]: I1125 12:16:13.739173 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:13 crc kubenswrapper[4688]: E1125 12:16:13.739321 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:14 crc kubenswrapper[4688]: I1125 12:16:14.739379 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:14 crc kubenswrapper[4688]: E1125 12:16:14.739949 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:14 crc kubenswrapper[4688]: I1125 12:16:14.739612 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:14 crc kubenswrapper[4688]: E1125 12:16:14.740114 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:14 crc kubenswrapper[4688]: I1125 12:16:14.739509 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:14 crc kubenswrapper[4688]: E1125 12:16:14.740638 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:15 crc kubenswrapper[4688]: I1125 12:16:15.739278 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:15 crc kubenswrapper[4688]: E1125 12:16:15.739585 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:15 crc kubenswrapper[4688]: E1125 12:16:15.881973 4688 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 12:16:16 crc kubenswrapper[4688]: I1125 12:16:16.739169 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:16 crc kubenswrapper[4688]: I1125 12:16:16.739242 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:16 crc kubenswrapper[4688]: E1125 12:16:16.739299 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:16 crc kubenswrapper[4688]: E1125 12:16:16.739398 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:16 crc kubenswrapper[4688]: I1125 12:16:16.739426 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:16 crc kubenswrapper[4688]: E1125 12:16:16.739596 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:17 crc kubenswrapper[4688]: I1125 12:16:17.739217 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:17 crc kubenswrapper[4688]: E1125 12:16:17.739386 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:18 crc kubenswrapper[4688]: I1125 12:16:18.739169 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:18 crc kubenswrapper[4688]: I1125 12:16:18.739214 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:18 crc kubenswrapper[4688]: I1125 12:16:18.739256 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:18 crc kubenswrapper[4688]: E1125 12:16:18.739355 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:18 crc kubenswrapper[4688]: E1125 12:16:18.739478 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:18 crc kubenswrapper[4688]: E1125 12:16:18.739614 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:19 crc kubenswrapper[4688]: I1125 12:16:19.739780 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:19 crc kubenswrapper[4688]: E1125 12:16:19.739974 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:20 crc kubenswrapper[4688]: I1125 12:16:20.739371 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:20 crc kubenswrapper[4688]: I1125 12:16:20.739371 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:20 crc kubenswrapper[4688]: E1125 12:16:20.741875 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:20 crc kubenswrapper[4688]: I1125 12:16:20.742028 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:20 crc kubenswrapper[4688]: E1125 12:16:20.742172 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:20 crc kubenswrapper[4688]: E1125 12:16:20.742555 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:20 crc kubenswrapper[4688]: E1125 12:16:20.882969 4688 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 25 12:16:21 crc kubenswrapper[4688]: E1125 12:16:21.445316 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3971fa_9838_436e_97b1_be050abea83a.slice/crio-conmon-b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:16:21 crc kubenswrapper[4688]: I1125 12:16:21.739593 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:21 crc kubenswrapper[4688]: E1125 12:16:21.739850 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:21 crc kubenswrapper[4688]: I1125 12:16:21.740598 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.426988 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/3.log" Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.431248 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerStarted","Data":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.431802 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.460963 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podStartSLOduration=106.460932968 podStartE2EDuration="1m46.460932968s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:22.459804729 +0000 UTC m=+132.569433637" watchObservedRunningTime="2025-11-25 12:16:22.460932968 +0000 UTC m=+132.570561886" Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.666491 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xbqw8"] Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.666668 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:22 crc kubenswrapper[4688]: E1125 12:16:22.666805 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.739273 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.739324 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.739301 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:22 crc kubenswrapper[4688]: E1125 12:16:22.739448 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:22 crc kubenswrapper[4688]: I1125 12:16:22.739460 4688 scope.go:117] "RemoveContainer" containerID="b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37" Nov 25 12:16:22 crc kubenswrapper[4688]: E1125 12:16:22.739544 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:22 crc kubenswrapper[4688]: E1125 12:16:22.739601 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:23 crc kubenswrapper[4688]: I1125 12:16:23.437134 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/1.log" Nov 25 12:16:23 crc kubenswrapper[4688]: I1125 12:16:23.437229 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlfw5" event={"ID":"6c3971fa-9838-436e-97b1-be050abea83a","Type":"ContainerStarted","Data":"a3e9c6a69286c30e5e1065345a9b07bc7c55dbdb934f75898c22e5a18d024119"} Nov 25 12:16:24 crc kubenswrapper[4688]: I1125 12:16:24.740761 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:24 crc kubenswrapper[4688]: I1125 12:16:24.740888 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:24 crc kubenswrapper[4688]: E1125 12:16:24.740916 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 12:16:24 crc kubenswrapper[4688]: I1125 12:16:24.740958 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:24 crc kubenswrapper[4688]: I1125 12:16:24.740996 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:24 crc kubenswrapper[4688]: E1125 12:16:24.741119 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 12:16:24 crc kubenswrapper[4688]: E1125 12:16:24.741403 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbqw8" podUID="45273ea2-4a52-4191-a40a-4b4d3b1a12dd" Nov 25 12:16:24 crc kubenswrapper[4688]: E1125 12:16:24.741499 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.739675 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.739769 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.739801 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.740065 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.743899 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.746000 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.746352 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.746489 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.746639 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 12:16:26 crc kubenswrapper[4688]: I1125 12:16:26.746809 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: E1125 12:16:31.490662 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3971fa_9838_436e_97b1_be050abea83a.slice/crio-conmon-b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.726093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.770097 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.770882 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.772514 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7zpmk"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.772986 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.773713 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.773727 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.778317 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.780714 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.780734 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6pbkn"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.781048 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.781814 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.782422 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.782830 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.782926 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.783000 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.783168 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.783745 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.783776 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.784017 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.799836 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.800813 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rpxm7"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.801712 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.802205 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8djqh"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.803417 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.804208 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.804747 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.805368 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.805938 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.806503 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.806716 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-shwnt"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.806965 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.807417 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.810546 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-d9t24"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.811315 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-d9t24" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.811697 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.811920 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kch26"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.812636 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.814541 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xf6xd"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.815105 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.815185 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.815700 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.818880 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.819570 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.821280 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.821489 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.821862 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.822146 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.822745 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.823339 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.824841 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.825193 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.825676 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.825860 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.825718 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.826172 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.826366 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.826561 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.828557 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.828825 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.828874 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.828828 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.829028 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.829203 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.829563 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.829714 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.829867 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.830131 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.830387 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.830610 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.830795 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.830916 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.830943 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831063 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 12:16:31 crc 
kubenswrapper[4688]: I1125 12:16:31.831082 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831219 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831357 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831420 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831490 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831562 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831365 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831654 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831716 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831738 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831799 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.831875 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.832012 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.832032 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.835065 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.835417 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.835862 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.836105 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.836126 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fnx92"] Nov 25 12:16:31 crc 
kubenswrapper[4688]: I1125 12:16:31.836501 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.836784 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.836812 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6pbkn"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.836942 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.837398 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.837790 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.837996 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.843538 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.843824 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.844388 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.844580 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.844657 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.844778 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.844827 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.845057 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.845154 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.845271 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.845370 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.846474 4688 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.850128 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bbsld"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.854903 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.855583 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.855841 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.856047 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.856262 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.856355 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.857049 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.857209 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.857491 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.859116 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.860387 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.872358 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.873479 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.874888 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.875131 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.884674 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.885343 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.885565 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.886205 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.886768 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hzmsk"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.886786 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.887263 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.887874 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.891283 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.891415 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.892208 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.892631 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7zpmk"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.892733 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.894313 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900786 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/199bf3df-657c-4fec-99c8-00abf00d41c0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900824 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-encryption-config\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900849 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13b81bbb-d493-4b29-a99e-b4dc92d6100e-audit-dir\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900874 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/450e4026-47c3-4cb0-8c6b-3275bb2942d5-config\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900895 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900918 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c3d286-5003-4b44-81c6-220e491ba838-serving-cert\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900942 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900962 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a465324f-c710-4331-80e5-68b5c0559887-node-pullsecrets\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.900981 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-etcd-client\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-config\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901023 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199bf3df-657c-4fec-99c8-00abf00d41c0-config\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901044 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2rjc\" (UniqueName: \"kubernetes.io/projected/13b81bbb-d493-4b29-a99e-b4dc92d6100e-kube-api-access-q2rjc\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmc98\" (UniqueName: \"kubernetes.io/projected/0a4461f7-9f73-409f-b237-3e81429a370c-kube-api-access-qmc98\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901087 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-image-import-ca\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901121 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2l5d\" (UniqueName: \"kubernetes.io/projected/c6228036-a923-4c34-ab54-cd2d04d98a5f-kube-api-access-l2l5d\") pod \"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901142 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0a4461f7-9f73-409f-b237-3e81429a370c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901161 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-serving-cert\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901184 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6wq\" (UniqueName: \"kubernetes.io/projected/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-kube-api-access-tr6wq\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901206 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9e976ef-06f6-4284-8fa7-38e61291b75d-machine-approver-tls\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901232 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-trusted-ca-bundle\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901260 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901293 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-config\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901318 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901338 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-encryption-config\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9e976ef-06f6-4284-8fa7-38e61291b75d-config\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901380 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9cw8\" (UniqueName: \"kubernetes.io/projected/99c26895-5a88-439e-83df-ddb6c9d1a1cb-kube-api-access-g9cw8\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901402 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snzk\" (UniqueName: \"kubernetes.io/projected/b9e976ef-06f6-4284-8fa7-38e61291b75d-kube-api-access-2snzk\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901423 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901442 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-oauth-config\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901462 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901481 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2199e3-75fc-44e3-93ab-205e84134ea3-serving-cert\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901512 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-service-ca\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901558 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99c26895-5a88-439e-83df-ddb6c9d1a1cb-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901581 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/450e4026-47c3-4cb0-8c6b-3275bb2942d5-trusted-ca\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901601 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901625 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901648 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-client-ca\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901669 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a465324f-c710-4331-80e5-68b5c0559887-audit-dir\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901692 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcmnh\" (UniqueName: \"kubernetes.io/projected/4888de7e-b0ae-4682-a404-545a9ba9cd82-kube-api-access-jcmnh\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6228036-a923-4c34-ab54-cd2d04d98a5f-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901732 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901752 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99c26895-5a88-439e-83df-ddb6c9d1a1cb-trusted-ca\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901772 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-config\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901792 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a4461f7-9f73-409f-b237-3e81429a370c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901811 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-audit\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901832 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh6pq\" (UniqueName: \"kubernetes.io/projected/a465324f-c710-4331-80e5-68b5c0559887-kube-api-access-dh6pq\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901852 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901874 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-policies\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: 
\"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901897 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfhlk\" (UniqueName: \"kubernetes.io/projected/8c1c5541-855e-4672-9bfe-080fdd2a42f1-kube-api-access-xfhlk\") pod \"downloads-7954f5f757-d9t24\" (UID: \"8c1c5541-855e-4672-9bfe-080fdd2a42f1\") " pod="openshift-console/downloads-7954f5f757-d9t24" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901918 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-etcd-client\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901941 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6228036-a923-4c34-ab54-cd2d04d98a5f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901960 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-audit-policies\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.901980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-service-ca-bundle\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902001 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902024 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902044 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: 
\"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-dir\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902086 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7j6\" (UniqueName: \"kubernetes.io/projected/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-kube-api-access-wf7j6\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902105 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-config\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902135 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902156 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkfhq\" (UniqueName: \"kubernetes.io/projected/14c3d286-5003-4b44-81c6-220e491ba838-kube-api-access-gkfhq\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902178 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/71d8cd5b-9140-4311-b663-c7b1dad5bb60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j6l7l\" (UID: \"71d8cd5b-9140-4311-b663-c7b1dad5bb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902199 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-config\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902219 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99c26895-5a88-439e-83df-ddb6c9d1a1cb-metrics-tls\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902243 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902263 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-etcd-serving-ca\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902315 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f809b0-59ab-47d4-b7d2-081a78d471fd-serving-cert\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902338 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkdq2\" (UniqueName: \"kubernetes.io/projected/450e4026-47c3-4cb0-8c6b-3275bb2942d5-kube-api-access-xkdq2\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902358 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djs2n\" (UniqueName: \"kubernetes.io/projected/c41361de-7942-4b97-97d8-9fd467394b25-kube-api-access-djs2n\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902379 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fb2199e3-75fc-44e3-93ab-205e84134ea3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902403 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8g8z\" (UniqueName: \"kubernetes.io/projected/71d8cd5b-9140-4311-b663-c7b1dad5bb60-kube-api-access-k8g8z\") pod \"cluster-samples-operator-665b6dd947-j6l7l\" (UID: \"71d8cd5b-9140-4311-b663-c7b1dad5bb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902424 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/450e4026-47c3-4cb0-8c6b-3275bb2942d5-serving-cert\") pod 
\"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902446 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902478 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpdq9\" (UniqueName: \"kubernetes.io/projected/199bf3df-657c-4fec-99c8-00abf00d41c0-kube-api-access-bpdq9\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902501 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9e976ef-06f6-4284-8fa7-38e61291b75d-auth-proxy-config\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902542 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902564 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2zjv\" (UniqueName: \"kubernetes.io/projected/fb2199e3-75fc-44e3-93ab-205e84134ea3-kube-api-access-z2zjv\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902587 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41361de-7942-4b97-97d8-9fd467394b25-serving-cert\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902608 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-serving-cert\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902629 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902648 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-serving-cert\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902669 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrlsp\" (UniqueName: \"kubernetes.io/projected/b0f809b0-59ab-47d4-b7d2-081a78d471fd-kube-api-access-wrlsp\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902699 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902731 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/199bf3df-657c-4fec-99c8-00abf00d41c0-images\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902751 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-client-ca\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.902771 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-oauth-serving-cert\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.907140 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.910259 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.915294 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.918542 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"trusted-ca-bundle" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.919414 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9vnd"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.920331 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.920982 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.927458 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.927606 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.955023 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.957444 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.955724 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.958660 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.965347 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.970450 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.975942 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.977130 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.981406 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.982917 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.984670 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.991845 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.994220 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p5bsr"] Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.994732 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.995301 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:31 crc kubenswrapper[4688]: I1125 12:16:31.998956 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:31.999970 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.000390 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.000676 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.001077 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.001925 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.003678 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.003939 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.004903 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5gsx"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005334 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9e976ef-06f6-4284-8fa7-38e61291b75d-auth-proxy-config\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005376 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpdq9\" (UniqueName: \"kubernetes.io/projected/199bf3df-657c-4fec-99c8-00abf00d41c0-kube-api-access-bpdq9\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005406 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/097d7867-7dee-4a99-9441-10f7f9aa5a76-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005431 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005456 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2zjv\" (UniqueName: \"kubernetes.io/projected/fb2199e3-75fc-44e3-93ab-205e84134ea3-kube-api-access-z2zjv\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005479 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41361de-7942-4b97-97d8-9fd467394b25-serving-cert\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005490 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005502 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-serving-cert\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005539 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005558 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-serving-cert\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005581 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005599 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrlsp\" (UniqueName: \"kubernetes.io/projected/b0f809b0-59ab-47d4-b7d2-081a78d471fd-kube-api-access-wrlsp\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005615 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/199bf3df-657c-4fec-99c8-00abf00d41c0-images\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005629 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-client-ca\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005645 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-oauth-serving-cert\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005662 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/450e4026-47c3-4cb0-8c6b-3275bb2942d5-config\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005677 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005693 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/199bf3df-657c-4fec-99c8-00abf00d41c0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005709 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-encryption-config\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005725 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13b81bbb-d493-4b29-a99e-b4dc92d6100e-audit-dir\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c3d286-5003-4b44-81c6-220e491ba838-serving-cert\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005757 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005773 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a465324f-c710-4331-80e5-68b5c0559887-node-pullsecrets\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005787 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-etcd-client\") pod \"apiserver-76f77b778f-rpxm7\" (UID: 
\"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005798 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.005803 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-config\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006066 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199bf3df-657c-4fec-99c8-00abf00d41c0-config\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006102 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2rjc\" (UniqueName: \"kubernetes.io/projected/13b81bbb-d493-4b29-a99e-b4dc92d6100e-kube-api-access-q2rjc\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006140 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmc98\" (UniqueName: \"kubernetes.io/projected/0a4461f7-9f73-409f-b237-3e81429a370c-kube-api-access-qmc98\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006191 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-image-import-ca\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006226 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2l5d\" (UniqueName: \"kubernetes.io/projected/c6228036-a923-4c34-ab54-cd2d04d98a5f-kube-api-access-l2l5d\") pod \"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006254 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4461f7-9f73-409f-b237-3e81429a370c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006278 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-serving-cert\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr6wq\" (UniqueName: \"kubernetes.io/projected/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-kube-api-access-tr6wq\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006336 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-stats-auth\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006385 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9e976ef-06f6-4284-8fa7-38e61291b75d-machine-approver-tls\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006416 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-trusted-ca-bundle\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006445 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006472 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9e976ef-06f6-4284-8fa7-38e61291b75d-config\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006498 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-config\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006544 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006575 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-encryption-config\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006607 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9cw8\" (UniqueName: \"kubernetes.io/projected/99c26895-5a88-439e-83df-ddb6c9d1a1cb-kube-api-access-g9cw8\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006636 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snzk\" (UniqueName: \"kubernetes.io/projected/b9e976ef-06f6-4284-8fa7-38e61291b75d-kube-api-access-2snzk\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006665 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006691 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-oauth-config\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006718 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006748 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2199e3-75fc-44e3-93ab-205e84134ea3-serving-cert\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006796 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqlg\" (UniqueName: \"kubernetes.io/projected/943de0dc-b19a-4411-afc4-9e7a82a771bf-kube-api-access-rfqlg\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006831 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-service-ca\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006867 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99c26895-5a88-439e-83df-ddb6c9d1a1cb-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006892 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097d7867-7dee-4a99-9441-10f7f9aa5a76-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-config\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006928 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/450e4026-47c3-4cb0-8c6b-3275bb2942d5-trusted-ca\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006953 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13b81bbb-d493-4b29-a99e-b4dc92d6100e-audit-dir\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006960 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6228036-a923-4c34-ab54-cd2d04d98a5f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.006993 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007023 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007053 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007083 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-client-ca\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007110 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a465324f-c710-4331-80e5-68b5c0559887-audit-dir\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007137 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcmnh\" (UniqueName: \"kubernetes.io/projected/4888de7e-b0ae-4682-a404-545a9ba9cd82-kube-api-access-jcmnh\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007169 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99c26895-5a88-439e-83df-ddb6c9d1a1cb-trusted-ca\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007196 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-config\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007225 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a4461f7-9f73-409f-b237-3e81429a370c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007254 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-audit\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 
12:16:32.007280 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh6pq\" (UniqueName: \"kubernetes.io/projected/a465324f-c710-4331-80e5-68b5c0559887-kube-api-access-dh6pq\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007308 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007369 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-policies\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007401 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfhlk\" (UniqueName: \"kubernetes.io/projected/8c1c5541-855e-4672-9bfe-080fdd2a42f1-kube-api-access-xfhlk\") pod \"downloads-7954f5f757-d9t24\" (UID: \"8c1c5541-855e-4672-9bfe-080fdd2a42f1\") " pod="openshift-console/downloads-7954f5f757-d9t24" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007428 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-etcd-client\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007458 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-metrics-certs\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007488 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-default-certificate\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007546 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6228036-a923-4c34-ab54-cd2d04d98a5f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007578 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-audit-policies\") pod 
\"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007607 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-service-ca-bundle\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007640 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007670 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007698 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007738 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-dir\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007764 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7j6\" (UniqueName: \"kubernetes.io/projected/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-kube-api-access-wf7j6\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-config\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007817 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/71d8cd5b-9140-4311-b663-c7b1dad5bb60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j6l7l\" (UID: \"71d8cd5b-9140-4311-b663-c7b1dad5bb60\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007841 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007868 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkfhq\" (UniqueName: \"kubernetes.io/projected/14c3d286-5003-4b44-81c6-220e491ba838-kube-api-access-gkfhq\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007896 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-config\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007926 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99c26895-5a88-439e-83df-ddb6c9d1a1cb-metrics-tls\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.007978 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-etcd-serving-ca\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008005 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943de0dc-b19a-4411-afc4-9e7a82a771bf-service-ca-bundle\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008035 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8g8z\" (UniqueName: \"kubernetes.io/projected/71d8cd5b-9140-4311-b663-c7b1dad5bb60-kube-api-access-k8g8z\") pod \"cluster-samples-operator-665b6dd947-j6l7l\" (UID: \"71d8cd5b-9140-4311-b663-c7b1dad5bb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008060 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f809b0-59ab-47d4-b7d2-081a78d471fd-serving-cert\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008086 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkdq2\" (UniqueName: \"kubernetes.io/projected/450e4026-47c3-4cb0-8c6b-3275bb2942d5-kube-api-access-xkdq2\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008111 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djs2n\" (UniqueName: \"kubernetes.io/projected/c41361de-7942-4b97-97d8-9fd467394b25-kube-api-access-djs2n\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008138 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fb2199e3-75fc-44e3-93ab-205e84134ea3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008164 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9dgd\" (UniqueName: \"kubernetes.io/projected/097d7867-7dee-4a99-9441-10f7f9aa5a76-kube-api-access-q9dgd\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008205 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/450e4026-47c3-4cb0-8c6b-3275bb2942d5-serving-cert\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008238 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.008625 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/199bf3df-657c-4fec-99c8-00abf00d41c0-images\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.009970 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.011241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/450e4026-47c3-4cb0-8c6b-3275bb2942d5-config\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.011874 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-client-ca\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.012700 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-oauth-serving-cert\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.013222 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a465324f-c710-4331-80e5-68b5c0559887-node-pullsecrets\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.013477 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99c26895-5a88-439e-83df-ddb6c9d1a1cb-trusted-ca\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.013926 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-dir\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.014140 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199bf3df-657c-4fec-99c8-00abf00d41c0-config\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.015833 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-image-import-ca\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 
12:16:32.016197 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-config\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.016669 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-serving-cert\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.017746 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-encryption-config\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.018050 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-serving-cert\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.018309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c3d286-5003-4b44-81c6-220e491ba838-serving-cert\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.019416 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-audit\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.019648 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.020361 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/450e4026-47c3-4cb0-8c6b-3275bb2942d5-trusted-ca\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.020800 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-trusted-ca-bundle\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.021051 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-service-ca\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.021090 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-config\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.021113 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.021164 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a465324f-c710-4331-80e5-68b5c0559887-audit-dir\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.021491 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.022139 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a4461f7-9f73-409f-b237-3e81429a370c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.022190 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9e976ef-06f6-4284-8fa7-38e61291b75d-config\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.022318 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a4461f7-9f73-409f-b237-3e81429a370c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.022765 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6228036-a923-4c34-ab54-cd2d04d98a5f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.023196 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.023439 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6228036-a923-4c34-ab54-cd2d04d98a5f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.023725 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.023852 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-audit-policies\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.024086 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.024254 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.024482 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.024625 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9e976ef-06f6-4284-8fa7-38e61291b75d-auth-proxy-config\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.024704 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13b81bbb-d493-4b29-a99e-b4dc92d6100e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.024806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a465324f-c710-4331-80e5-68b5c0559887-etcd-serving-ca\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.025069 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.025082 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-config\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.025159 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.025190 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fb2199e3-75fc-44e3-93ab-205e84134ea3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.025290 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-config\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.025828 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.026304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f809b0-59ab-47d4-b7d2-081a78d471fd-service-ca-bundle\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.026482 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13b81bbb-d493-4b29-a99e-b4dc92d6100e-etcd-client\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.026781 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-etcd-client\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.026914 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-oauth-config\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.027312 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2199e3-75fc-44e3-93ab-205e84134ea3-serving-cert\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.027328 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-serving-cert\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.027369 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f809b0-59ab-47d4-b7d2-081a78d471fd-serving-cert\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.027590 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-client-ca\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.028044 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/199bf3df-657c-4fec-99c8-00abf00d41c0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.029047 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99c26895-5a88-439e-83df-ddb6c9d1a1cb-metrics-tls\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.029727 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/450e4026-47c3-4cb0-8c6b-3275bb2942d5-serving-cert\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.029998 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.030290 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41361de-7942-4b97-97d8-9fd467394b25-serving-cert\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.031485 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.032275 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.032730 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-policies\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.032945 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.036885 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-46zfm"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.037629 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mccvx"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.038913 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.039764 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.040724 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.040995 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.044285 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.045347 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.045827 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.046464 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9e976ef-06f6-4284-8fa7-38e61291b75d-machine-approver-tls\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.046509 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/71d8cd5b-9140-4311-b663-c7b1dad5bb60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j6l7l\" (UID: \"71d8cd5b-9140-4311-b663-c7b1dad5bb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.046807 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.046937 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a465324f-c710-4331-80e5-68b5c0559887-encryption-config\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.047289 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.047363 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.049705 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8djqh"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.050166 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.050578 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.051430 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-d9t24"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.052621 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.052803 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-shwnt"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.054468 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rpxm7"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.055512 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.057945 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fnx92"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.057984 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gtmpg"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.059568 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.059687 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.062629 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.062849 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kch26"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.063311 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.069509 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.069644 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.073291 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.074384 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.074471 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bbsld"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.075469 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xf6xd"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.076907 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.077987 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.079311 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.082150 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.088809 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9vnd"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.094178 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mccvx"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.094257 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.096177 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.099997 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6"] Nov 25 
12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.101605 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5gsx"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.102833 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p5bsr"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.104156 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9htst"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.106045 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-f9wfz"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.106863 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.106987 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.107642 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9htst" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.108131 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109095 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-metrics-certs\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109145 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-default-certificate\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943de0dc-b19a-4411-afc4-9e7a82a771bf-service-ca-bundle\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109253 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9dgd\" (UniqueName: \"kubernetes.io/projected/097d7867-7dee-4a99-9441-10f7f9aa5a76-kube-api-access-q9dgd\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/097d7867-7dee-4a99-9441-10f7f9aa5a76-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109430 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-stats-auth\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109507 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqlg\" (UniqueName: \"kubernetes.io/projected/943de0dc-b19a-4411-afc4-9e7a82a771bf-kube-api-access-rfqlg\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109620 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097d7867-7dee-4a99-9441-10f7f9aa5a76-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.109872 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.112049 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-46zfm"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.112694 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.114491 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.115812 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.118014 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.120180 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gtmpg"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.121711 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9htst"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.123290 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-f9wfz"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.125072 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fcftz"] Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.126133 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.133151 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.153472 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.163581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/097d7867-7dee-4a99-9441-10f7f9aa5a76-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.173175 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.180725 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097d7867-7dee-4a99-9441-10f7f9aa5a76-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.192857 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.224894 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.236430 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.253490 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.273563 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.293050 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.315031 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.322696 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-stats-auth\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.333623 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.344472 
4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-metrics-certs\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.353460 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.373205 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.385489 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/943de0dc-b19a-4411-afc4-9e7a82a771bf-default-certificate\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.393332 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.401222 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943de0dc-b19a-4411-afc4-9e7a82a771bf-service-ca-bundle\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.413461 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.433274 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.453358 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.476398 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.493004 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.512892 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.533963 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.574204 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.593768 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.615067 4688 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.633322 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.653614 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.673872 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.693791 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.715253 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.734561 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.753233 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.773466 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.793452 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.814169 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.832934 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.853716 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.873631 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.893964 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.914710 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.933867 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.954134 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.974039 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 12:16:32 crc kubenswrapper[4688]: I1125 12:16:32.993571 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.011765 4688 request.go:700] Waited for 1.010654323s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0 Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.013209 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.033822 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.053826 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.074400 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.096965 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.120363 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.134436 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.155349 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.194855 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrlsp\" (UniqueName: \"kubernetes.io/projected/b0f809b0-59ab-47d4-b7d2-081a78d471fd-kube-api-access-wrlsp\") pod \"authentication-operator-69f744f599-shwnt\" (UID: \"b0f809b0-59ab-47d4-b7d2-081a78d471fd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.204419 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.209110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2rjc\" (UniqueName: \"kubernetes.io/projected/13b81bbb-d493-4b29-a99e-b4dc92d6100e-kube-api-access-q2rjc\") pod \"apiserver-7bbb656c7d-bndd9\" (UID: \"13b81bbb-d493-4b29-a99e-b4dc92d6100e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.235284 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmc98\" (UniqueName: \"kubernetes.io/projected/0a4461f7-9f73-409f-b237-3e81429a370c-kube-api-access-qmc98\") pod \"openshift-apiserver-operator-796bbdcf4f-tr842\" (UID: \"0a4461f7-9f73-409f-b237-3e81429a370c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.248615 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7j6\" (UniqueName: \"kubernetes.io/projected/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-kube-api-access-wf7j6\") pod \"oauth-openshift-558db77b4-6pbkn\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.269954 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh6pq\" (UniqueName: \"kubernetes.io/projected/a465324f-c710-4331-80e5-68b5c0559887-kube-api-access-dh6pq\") pod \"apiserver-76f77b778f-rpxm7\" (UID: \"a465324f-c710-4331-80e5-68b5c0559887\") " pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.288931 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpdq9\" (UniqueName: \"kubernetes.io/projected/199bf3df-657c-4fec-99c8-00abf00d41c0-kube-api-access-bpdq9\") pod \"machine-api-operator-5694c8668f-8djqh\" (UID: \"199bf3df-657c-4fec-99c8-00abf00d41c0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.318349 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2l5d\" (UniqueName: \"kubernetes.io/projected/c6228036-a923-4c34-ab54-cd2d04d98a5f-kube-api-access-l2l5d\") pod \"kube-storage-version-migrator-operator-b67b599dd-lgtc8\" (UID: \"c6228036-a923-4c34-ab54-cd2d04d98a5f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.321435 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.336894 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2zjv\" (UniqueName: \"kubernetes.io/projected/fb2199e3-75fc-44e3-93ab-205e84134ea3-kube-api-access-z2zjv\") pod \"openshift-config-operator-7777fb866f-c6wp5\" (UID: \"fb2199e3-75fc-44e3-93ab-205e84134ea3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.350238 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99c26895-5a88-439e-83df-ddb6c9d1a1cb-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.354834 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.370693 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.380825 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.389052 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcmnh\" (UniqueName: \"kubernetes.io/projected/4888de7e-b0ae-4682-a404-545a9ba9cd82-kube-api-access-jcmnh\") pod \"console-f9d7485db-xf6xd\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.409939 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.414432 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfhlk\" (UniqueName: \"kubernetes.io/projected/8c1c5541-855e-4672-9bfe-080fdd2a42f1-kube-api-access-xfhlk\") pod \"downloads-7954f5f757-d9t24\" (UID: \"8c1c5541-855e-4672-9bfe-080fdd2a42f1\") " pod="openshift-console/downloads-7954f5f757-d9t24" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.423909 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.429152 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkfhq\" (UniqueName: \"kubernetes.io/projected/14c3d286-5003-4b44-81c6-220e491ba838-kube-api-access-gkfhq\") pod \"controller-manager-879f6c89f-7zpmk\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.441578 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-shwnt"] Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.450240 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr6wq\" (UniqueName: \"kubernetes.io/projected/c9eb1fd5-1d27-4d86-8ec4-4223513209d5-kube-api-access-tr6wq\") pod \"cluster-image-registry-operator-dc59b4c8b-stpdt\" (UID: \"c9eb1fd5-1d27-4d86-8ec4-4223513209d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.457217 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.482794 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9cw8\" (UniqueName: \"kubernetes.io/projected/99c26895-5a88-439e-83df-ddb6c9d1a1cb-kube-api-access-g9cw8\") pod \"ingress-operator-5b745b69d9-6vn95\" (UID: \"99c26895-5a88-439e-83df-ddb6c9d1a1cb\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.492418 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2snzk\" (UniqueName: \"kubernetes.io/projected/b9e976ef-06f6-4284-8fa7-38e61291b75d-kube-api-access-2snzk\") pod \"machine-approver-56656f9798-z98wq\" (UID: \"b9e976ef-06f6-4284-8fa7-38e61291b75d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.516799 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djs2n\" (UniqueName: \"kubernetes.io/projected/c41361de-7942-4b97-97d8-9fd467394b25-kube-api-access-djs2n\") pod \"route-controller-manager-6576b87f9c-67wqq\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.517933 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-d9t24" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.545132 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8g8z\" (UniqueName: \"kubernetes.io/projected/71d8cd5b-9140-4311-b663-c7b1dad5bb60-kube-api-access-k8g8z\") pod \"cluster-samples-operator-665b6dd947-j6l7l\" (UID: \"71d8cd5b-9140-4311-b663-c7b1dad5bb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.550320 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkdq2\" (UniqueName: \"kubernetes.io/projected/450e4026-47c3-4cb0-8c6b-3275bb2942d5-kube-api-access-xkdq2\") pod \"console-operator-58897d9998-kch26\" (UID: \"450e4026-47c3-4cb0-8c6b-3275bb2942d5\") " pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.553111 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.576888 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.577393 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.582316 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.583062 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8"] Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.596600 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.596707 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.606267 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.612620 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.614683 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.619144 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.628113 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.633204 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.633995 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.654217 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.674839 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.695642 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.725938 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9"] Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.736348 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.750591 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8djqh"] Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.753813 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.769417 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842"] Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.774240 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.780310 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" Nov 25 12:16:33 crc kubenswrapper[4688]: W1125 12:16:33.780792 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod199bf3df_657c_4fec_99c8_00abf00d41c0.slice/crio-d9c18aa0591e4a1c2e3a43cb6be63db4d7b4fca369a0b6e74bf15c3e649e3c50 WatchSource:0}: Error finding container d9c18aa0591e4a1c2e3a43cb6be63db4d7b4fca369a0b6e74bf15c3e649e3c50: Status 404 returned error can't find the container with id d9c18aa0591e4a1c2e3a43cb6be63db4d7b4fca369a0b6e74bf15c3e649e3c50 Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.786472 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.794319 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 12:16:33 crc kubenswrapper[4688]: W1125 12:16:33.804483 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a4461f7_9f73_409f_b237_3e81429a370c.slice/crio-00343a8b7ed21380b34f71d74b26f76876b1773ccba15e5903a80fb99d2ab8ce WatchSource:0}: Error finding container 00343a8b7ed21380b34f71d74b26f76876b1773ccba15e5903a80fb99d2ab8ce: Status 404 returned error can't find the container with id 00343a8b7ed21380b34f71d74b26f76876b1773ccba15e5903a80fb99d2ab8ce Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.817696 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.835038 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.855882 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.873843 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.881644 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rpxm7"] Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.896127 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.904956 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6pbkn"] Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.916228 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.935482 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.953889 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.977316 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 12:16:33 crc kubenswrapper[4688]: I1125 12:16:33.998197 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.011879 4688 request.go:700] Waited for 1.95175268s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.015939 4688 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.034894 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.053143 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.075153 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.092855 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.095597 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-d9t24"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.098276 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xf6xd"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.113299 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.133122 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.153116 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.176672 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.179640 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kch26"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.181927 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.212550 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9dgd\" (UniqueName: \"kubernetes.io/projected/097d7867-7dee-4a99-9441-10f7f9aa5a76-kube-api-access-q9dgd\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztp4s\" (UID: \"097d7867-7dee-4a99-9441-10f7f9aa5a76\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.230777 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqlg\" (UniqueName: \"kubernetes.io/projected/943de0dc-b19a-4411-afc4-9e7a82a771bf-kube-api-access-rfqlg\") pod \"router-default-5444994796-hzmsk\" (UID: \"943de0dc-b19a-4411-afc4-9e7a82a771bf\") " pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.233630 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.243708 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.257944 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.259209 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.262211 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.263695 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7zpmk"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.264947 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.270452 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.274441 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.345505 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq"] Nov 25 12:16:34 crc kubenswrapper[4688]: W1125 12:16:34.350869 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9eb1fd5_1d27_4d86_8ec4_4223513209d5.slice/crio-fb217bef53b40963d7becaf94b2be78d3a92745d2d8b1775085113be8e4be6a9 WatchSource:0}: Error finding container fb217bef53b40963d7becaf94b2be78d3a92745d2d8b1775085113be8e4be6a9: Status 404 returned error can't find the container with id fb217bef53b40963d7becaf94b2be78d3a92745d2d8b1775085113be8e4be6a9 Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.357414 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.357469 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c499b055-70b4-4568-9e27-6cd9f38d54d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9vnd\" (UID: \"c499b055-70b4-4568-9e27-6cd9f38d54d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.357565 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6snn\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-kube-api-access-m6snn\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.357610 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpq5\" (UniqueName: \"kubernetes.io/projected/3a9faba3-cc0a-4870-947e-d500a0babe30-kube-api-access-mhpq5\") pod \"dns-operator-744455d44c-bbsld\" (UID: \"3a9faba3-cc0a-4870-947e-d500a0babe30\") " pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.358868 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-registry-certificates\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359031 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-trusted-ca\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359177 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/287e5654-ecac-4340-ad1f-9a307d57de32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359232 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-942p6\" (UniqueName: \"kubernetes.io/projected/c499b055-70b4-4568-9e27-6cd9f38d54d5-kube-api-access-942p6\") pod \"multus-admission-controller-857f4d67dd-p9vnd\" (UID: \"c499b055-70b4-4568-9e27-6cd9f38d54d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-config\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359384 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a9faba3-cc0a-4870-947e-d500a0babe30-metrics-tls\") pod \"dns-operator-744455d44c-bbsld\" (UID: \"3a9faba3-cc0a-4870-947e-d500a0babe30\") " pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359461 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359589 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359737 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/287e5654-ecac-4340-ad1f-9a307d57de32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359782 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-registry-tls\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.359805 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-bound-sa-token\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.359926 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:34.859883771 +0000 UTC m=+144.969512759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:34 crc kubenswrapper[4688]: W1125 12:16:34.436729 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c3d286_5003_4b44_81c6_220e491ba838.slice/crio-ea4209c2e9ab0751a22fc70acfee781e733793ec3e3662b33a15047ce29d870c WatchSource:0}: Error finding container ea4209c2e9ab0751a22fc70acfee781e733793ec3e3662b33a15047ce29d870c: Status 404 returned error can't find the container with id ea4209c2e9ab0751a22fc70acfee781e733793ec3e3662b33a15047ce29d870c Nov 25 12:16:34 crc kubenswrapper[4688]: W1125 12:16:34.446480 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc41361de_7942_4b97_97d8_9fd467394b25.slice/crio-d295f9150ef02fa233ede57154926349031146520c3a2b795cfb888dead19dda WatchSource:0}: Error finding container d295f9150ef02fa233ede57154926349031146520c3a2b795cfb888dead19dda: Status 404 returned error can't find the container with id d295f9150ef02fa233ede57154926349031146520c3a2b795cfb888dead19dda Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461153 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461425 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpq5\" (UniqueName: \"kubernetes.io/projected/3a9faba3-cc0a-4870-947e-d500a0babe30-kube-api-access-mhpq5\") pod \"dns-operator-744455d44c-bbsld\" (UID: \"3a9faba3-cc0a-4870-947e-d500a0babe30\") " pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461457 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr6ch\" (UniqueName: \"kubernetes.io/projected/5945db78-57d1-446d-9baa-57543c83ba4b-kube-api-access-hr6ch\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461498 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ea42342-71fc-475e-ade6-027a0e9df527-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461551 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-client\") pod 
\"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461599 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhk6\" (UniqueName: \"kubernetes.io/projected/ea6c5347-6ca8-4ed2-aed1-543a71ae1e11-kube-api-access-2mhk6\") pod \"package-server-manager-789f6589d5-98k7h\" (UID: \"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461625 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9f84\" (UniqueName: \"kubernetes.io/projected/7952e5c1-f45f-492b-9f7c-b92b2d079994-kube-api-access-z9f84\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461758 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lzlt\" (UniqueName: \"kubernetes.io/projected/694823ec-105f-4183-9dfb-8fa7f414c8ac-kube-api-access-4lzlt\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.461848 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:34.961818934 +0000 UTC m=+145.071447802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461909 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-trusted-ca\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.461962 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xnn\" (UniqueName: \"kubernetes.io/projected/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-kube-api-access-64xnn\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrtr\" (UniqueName: \"kubernetes.io/projected/486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c-kube-api-access-lnrtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm98m\" (UID: \"486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462170 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-node-bootstrap-token\") pod \"machine-config-server-fcftz\" (UID: \"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462191 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-certs\") pod \"machine-config-server-fcftz\" (UID: \"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462216 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462269 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 
12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462307 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86dafa98-d11c-4c07-96ae-917b679a1ccb-config\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462341 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23b58602-0c57-4bbd-8ea8-977e541d1e20-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462358 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-srv-cert\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462506 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/694823ec-105f-4183-9dfb-8fa7f414c8ac-secret-volume\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.462557 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:34.962543673 +0000 UTC m=+145.072172541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462599 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glnf\" (UniqueName: \"kubernetes.io/projected/b16521c9-4940-4ab4-acda-cec2b56f285e-kube-api-access-9glnf\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462638 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1166912-b9ca-439f-b012-8312d1b51b0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462653 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1166912-b9ca-439f-b012-8312d1b51b0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462750 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-mountpoint-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462802 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-plugins-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462828 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462886 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-images\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc 
kubenswrapper[4688]: I1125 12:16:34.462916 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-proxy-tls\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462945 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-bound-sa-token\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.462969 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/668dbb02-474b-4540-b12b-f7d316925e42-cert\") pod \"ingress-canary-9htst\" (UID: \"668dbb02-474b-4540-b12b-f7d316925e42\") " pod="openshift-ingress-canary/ingress-canary-9htst" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463118 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rddf\" (UniqueName: \"kubernetes.io/projected/4dd7b724-e84c-4638-b5cd-acd5e52aa110-kube-api-access-4rddf\") pod \"migrator-59844c95c7-q7hm8\" (UID: \"4dd7b724-e84c-4638-b5cd-acd5e52aa110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463209 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6snn\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-kube-api-access-m6snn\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463232 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5c5eb284-ad88-40eb-b21e-fe2d643e833d-signing-key\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463253 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b58602-0c57-4bbd-8ea8-977e541d1e20-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463298 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ea42342-71fc-475e-ade6-027a0e9df527-proxy-tls\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463408 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-6t8mk\" (UniqueName: \"kubernetes.io/projected/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-kube-api-access-6t8mk\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463452 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ae06fe-7606-45ad-ace3-2939ba858156-config-volume\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463475 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-srv-cert\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463554 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463600 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463619 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5945db78-57d1-446d-9baa-57543c83ba4b-serving-cert\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463637 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-registration-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463677 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwzv\" (UniqueName: \"kubernetes.io/projected/668dbb02-474b-4540-b12b-f7d316925e42-kube-api-access-tmwzv\") pod \"ingress-canary-9htst\" (UID: \"668dbb02-474b-4540-b12b-f7d316925e42\") " pod="openshift-ingress-canary/ingress-canary-9htst" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463715 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/d6340120-964d-4b2a-84c4-90490eacfd53-tmpfs\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463747 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjfhl\" (UniqueName: \"kubernetes.io/projected/86dafa98-d11c-4c07-96ae-917b679a1ccb-kube-api-access-pjfhl\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463886 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-registry-certificates\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463944 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b58602-0c57-4bbd-8ea8-977e541d1e20-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.463818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-trusted-ca\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464360 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86dafa98-d11c-4c07-96ae-917b679a1ccb-serving-cert\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464425 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzb8t\" (UniqueName: \"kubernetes.io/projected/8ea42342-71fc-475e-ade6-027a0e9df527-kube-api-access-nzb8t\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464481 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/287e5654-ecac-4340-ad1f-9a307d57de32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464500 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-service-ca\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464540 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-942p6\" (UniqueName: \"kubernetes.io/projected/c499b055-70b4-4568-9e27-6cd9f38d54d5-kube-api-access-942p6\") pod \"multus-admission-controller-857f4d67dd-p9vnd\" (UID: \"c499b055-70b4-4568-9e27-6cd9f38d54d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464560 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5sjz\" (UniqueName: \"kubernetes.io/projected/71ae06fe-7606-45ad-ace3-2939ba858156-kube-api-access-k5sjz\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464579 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdz2f\" (UniqueName: \"kubernetes.io/projected/5c5eb284-ad88-40eb-b21e-fe2d643e833d-kube-api-access-jdz2f\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464607 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gkhr\" (UniqueName: \"kubernetes.io/projected/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-kube-api-access-6gkhr\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464628 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-config\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464649 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-config\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464668 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a9faba3-cc0a-4870-947e-d500a0babe30-metrics-tls\") pod \"dns-operator-744455d44c-bbsld\" (UID: \"3a9faba3-cc0a-4870-947e-d500a0babe30\") " pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464683 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdb28\" (UniqueName: \"kubernetes.io/projected/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-kube-api-access-fdb28\") pod \"machine-config-server-fcftz\" (UID: 
\"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464714 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464733 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-ca\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464765 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-socket-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464783 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71ae06fe-7606-45ad-ace3-2939ba858156-metrics-tls\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464798 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82b6p\" (UniqueName: \"kubernetes.io/projected/d6340120-964d-4b2a-84c4-90490eacfd53-kube-api-access-82b6p\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464835 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464905 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1166912-b9ca-439f-b012-8312d1b51b0d-config\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464913 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/287e5654-ecac-4340-ad1f-9a307d57de32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc 
kubenswrapper[4688]: I1125 12:16:34.464971 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/287e5654-ecac-4340-ad1f-9a307d57de32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.464999 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-csi-data-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.465074 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6c5347-6ca8-4ed2-aed1-543a71ae1e11-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-98k7h\" (UID: \"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.465156 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-registry-tls\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.465428 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-config\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.466115 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-registry-certificates\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.466215 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d6340120-964d-4b2a-84c4-90490eacfd53-apiservice-cert\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.466247 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6340120-964d-4b2a-84c4-90490eacfd53-webhook-cert\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.466291 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.466311 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c499b055-70b4-4568-9e27-6cd9f38d54d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9vnd\" (UID: \"c499b055-70b4-4568-9e27-6cd9f38d54d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.466330 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm98m\" (UID: \"486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.466349 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/694823ec-105f-4183-9dfb-8fa7f414c8ac-config-volume\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.466366 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5c5eb284-ad88-40eb-b21e-fe2d643e833d-signing-cabundle\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:34 crc kubenswrapper[4688]: W1125 12:16:34.467405 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod943de0dc_b19a_4411_afc4_9e7a82a771bf.slice/crio-1de7055266ba531b6cf78769721a932f83b26c81a3f47577d9da90015b5690c9 WatchSource:0}: Error finding container 1de7055266ba531b6cf78769721a932f83b26c81a3f47577d9da90015b5690c9: Status 404 returned error can't find the container with id 1de7055266ba531b6cf78769721a932f83b26c81a3f47577d9da90015b5690c9 Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.469089 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/287e5654-ecac-4340-ad1f-9a307d57de32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.469807 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-registry-tls\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: 
I1125 12:16:34.470584 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c499b055-70b4-4568-9e27-6cd9f38d54d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9vnd\" (UID: \"c499b055-70b4-4568-9e27-6cd9f38d54d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.483946 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" event={"ID":"0a4461f7-9f73-409f-b237-3e81429a370c","Type":"ContainerStarted","Data":"00343a8b7ed21380b34f71d74b26f76876b1773ccba15e5903a80fb99d2ab8ce"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.487076 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.487236 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a9faba3-cc0a-4870-947e-d500a0babe30-metrics-tls\") pod \"dns-operator-744455d44c-bbsld\" (UID: \"3a9faba3-cc0a-4870-947e-d500a0babe30\") " pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.487368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-d9t24" event={"ID":"8c1c5541-855e-4672-9bfe-080fdd2a42f1","Type":"ContainerStarted","Data":"590c9c8024b4eaf61b7cfe2cb93c287574df02fcae27db61e231e2b5ad320cf8"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.489281 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" event={"ID":"13b81bbb-d493-4b29-a99e-b4dc92d6100e","Type":"ContainerStarted","Data":"db9e0ce11fd2a075088d729023c503c5c17a9ad46959c192457839581edb2314"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.490464 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" event={"ID":"b9e976ef-06f6-4284-8fa7-38e61291b75d","Type":"ContainerStarted","Data":"405dcca21b18347abf5b09be92b31f25aeef6d7a42dda4b7ba2d16d578efc70a"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.491434 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" event={"ID":"c41361de-7942-4b97-97d8-9fd467394b25","Type":"ContainerStarted","Data":"d295f9150ef02fa233ede57154926349031146520c3a2b795cfb888dead19dda"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.493195 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" event={"ID":"c9eb1fd5-1d27-4d86-8ec4-4223513209d5","Type":"ContainerStarted","Data":"fb217bef53b40963d7becaf94b2be78d3a92745d2d8b1775085113be8e4be6a9"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.494204 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" 
event={"ID":"fb2199e3-75fc-44e3-93ab-205e84134ea3","Type":"ContainerStarted","Data":"ebd4a62d219f17ab3e38ccc0005362dc1976f36b8e797020c50aeeefd61b810e"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.496288 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" event={"ID":"199bf3df-657c-4fec-99c8-00abf00d41c0","Type":"ContainerStarted","Data":"d9c18aa0591e4a1c2e3a43cb6be63db4d7b4fca369a0b6e74bf15c3e649e3c50"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.498035 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xf6xd" event={"ID":"4888de7e-b0ae-4682-a404-545a9ba9cd82","Type":"ContainerStarted","Data":"851594b7798968c125f5226aeab9f9e361e1b470d7ba8cd0c4334950d6d29d09"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.499241 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" event={"ID":"b0f809b0-59ab-47d4-b7d2-081a78d471fd","Type":"ContainerStarted","Data":"1c2f4b1ff26d1a29d40d8d441b0d0759dcaa7effd283e185c0c37e341f1113a0"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.499276 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" event={"ID":"b0f809b0-59ab-47d4-b7d2-081a78d471fd","Type":"ContainerStarted","Data":"4b19c7820b96750f28579446bfecba92155f1b504a93ed1017bf7fe80ffd9e54"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.499909 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" event={"ID":"14c3d286-5003-4b44-81c6-220e491ba838","Type":"ContainerStarted","Data":"ea4209c2e9ab0751a22fc70acfee781e733793ec3e3662b33a15047ce29d870c"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.500623 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" event={"ID":"99c26895-5a88-439e-83df-ddb6c9d1a1cb","Type":"ContainerStarted","Data":"0e41eb7693787f6d011eed5a88540c413c8fc3849c826d56d1c31a10f6c36eb1"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.501444 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" event={"ID":"c6228036-a923-4c34-ab54-cd2d04d98a5f","Type":"ContainerStarted","Data":"e6954110506534dcbae2602f0459cb923e4811249c8180c6572d04f0cdd9e083"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.501475 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" event={"ID":"c6228036-a923-4c34-ab54-cd2d04d98a5f","Type":"ContainerStarted","Data":"8cd49ab8f2969fc19e0c1e7d6592fbe7d77a1d49a235656321864acc9d9fc886"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.502595 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" event={"ID":"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4","Type":"ContainerStarted","Data":"d4d9361a04bcd32df1a3200e3ff84229b94c003ab61cea2d819a9c3b8174f395"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.503133 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kch26" 
event={"ID":"450e4026-47c3-4cb0-8c6b-3275bb2942d5","Type":"ContainerStarted","Data":"e3dd4c65cf5d663d987b04a4c889587f1cf48c822ab0fdd439450007f2b8a0de"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.503627 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hzmsk" event={"ID":"943de0dc-b19a-4411-afc4-9e7a82a771bf","Type":"ContainerStarted","Data":"1de7055266ba531b6cf78769721a932f83b26c81a3f47577d9da90015b5690c9"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.504488 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" event={"ID":"a465324f-c710-4331-80e5-68b5c0559887","Type":"ContainerStarted","Data":"35f1fbc78bbe1389ba3c3f812a211628d12fa2cd1c190d9402d6caca8a1fc576"} Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.508491 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhpq5\" (UniqueName: \"kubernetes.io/projected/3a9faba3-cc0a-4870-947e-d500a0babe30-kube-api-access-mhpq5\") pod \"dns-operator-744455d44c-bbsld\" (UID: \"3a9faba3-cc0a-4870-947e-d500a0babe30\") " pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.528106 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-bound-sa-token\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.550676 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.560815 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6snn\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-kube-api-access-m6snn\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.568874 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569093 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-ca\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.569209 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.069174648 +0000 UTC m=+145.178803526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569276 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-socket-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569311 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71ae06fe-7606-45ad-ace3-2939ba858156-metrics-tls\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569346 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82b6p\" (UniqueName: \"kubernetes.io/projected/d6340120-964d-4b2a-84c4-90490eacfd53-kube-api-access-82b6p\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569376 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1166912-b9ca-439f-b012-8312d1b51b0d-config\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569406 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569439 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-csi-data-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569474 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6c5347-6ca8-4ed2-aed1-543a71ae1e11-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-98k7h\" (UID: \"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569562 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d6340120-964d-4b2a-84c4-90490eacfd53-apiservice-cert\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569594 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6340120-964d-4b2a-84c4-90490eacfd53-webhook-cert\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569632 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5c5eb284-ad88-40eb-b21e-fe2d643e833d-signing-cabundle\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569666 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm98m\" (UID: \"486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/694823ec-105f-4183-9dfb-8fa7f414c8ac-config-volume\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569737 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr6ch\" (UniqueName: \"kubernetes.io/projected/5945db78-57d1-446d-9baa-57543c83ba4b-kube-api-access-hr6ch\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569773 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-client\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569804 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ea42342-71fc-475e-ade6-027a0e9df527-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569845 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhk6\" (UniqueName: \"kubernetes.io/projected/ea6c5347-6ca8-4ed2-aed1-543a71ae1e11-kube-api-access-2mhk6\") pod \"package-server-manager-789f6589d5-98k7h\" 
(UID: \"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569888 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9f84\" (UniqueName: \"kubernetes.io/projected/7952e5c1-f45f-492b-9f7c-b92b2d079994-kube-api-access-z9f84\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569926 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lzlt\" (UniqueName: \"kubernetes.io/projected/694823ec-105f-4183-9dfb-8fa7f414c8ac-kube-api-access-4lzlt\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.569966 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xnn\" (UniqueName: \"kubernetes.io/projected/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-kube-api-access-64xnn\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-certs\") pod \"machine-config-server-fcftz\" (UID: \"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570039 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrtr\" (UniqueName: \"kubernetes.io/projected/486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c-kube-api-access-lnrtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm98m\" (UID: \"486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570072 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-node-bootstrap-token\") pod \"machine-config-server-fcftz\" (UID: \"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570111 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570142 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-socket-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 
12:16:34.570148 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86dafa98-d11c-4c07-96ae-917b679a1ccb-config\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570198 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570228 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23b58602-0c57-4bbd-8ea8-977e541d1e20-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570251 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-srv-cert\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570262 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1166912-b9ca-439f-b012-8312d1b51b0d-config\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570276 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9glnf\" (UniqueName: \"kubernetes.io/projected/b16521c9-4940-4ab4-acda-cec2b56f285e-kube-api-access-9glnf\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570310 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1166912-b9ca-439f-b012-8312d1b51b0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570336 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1166912-b9ca-439f-b012-8312d1b51b0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570359 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/694823ec-105f-4183-9dfb-8fa7f414c8ac-secret-volume\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570395 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-mountpoint-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570419 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-plugins-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570440 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570462 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-images\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570481 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-proxy-tls\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570513 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/668dbb02-474b-4540-b12b-f7d316925e42-cert\") pod \"ingress-canary-9htst\" (UID: \"668dbb02-474b-4540-b12b-f7d316925e42\") " pod="openshift-ingress-canary/ingress-canary-9htst" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570558 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rddf\" (UniqueName: \"kubernetes.io/projected/4dd7b724-e84c-4638-b5cd-acd5e52aa110-kube-api-access-4rddf\") pod \"migrator-59844c95c7-q7hm8\" (UID: \"4dd7b724-e84c-4638-b5cd-acd5e52aa110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5c5eb284-ad88-40eb-b21e-fe2d643e833d-signing-key\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 
12:16:34.570610 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b58602-0c57-4bbd-8ea8-977e541d1e20-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570633 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ea42342-71fc-475e-ade6-027a0e9df527-proxy-tls\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.570680 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.070665728 +0000 UTC m=+145.180294606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570702 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ae06fe-7606-45ad-ace3-2939ba858156-config-volume\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570728 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-srv-cert\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570751 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t8mk\" (UniqueName: \"kubernetes.io/projected/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-kube-api-access-6t8mk\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570813 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570834 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5945db78-57d1-446d-9baa-57543c83ba4b-serving-cert\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570855 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-registration-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570878 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwzv\" (UniqueName: \"kubernetes.io/projected/668dbb02-474b-4540-b12b-f7d316925e42-kube-api-access-tmwzv\") pod \"ingress-canary-9htst\" (UID: \"668dbb02-474b-4540-b12b-f7d316925e42\") " pod="openshift-ingress-canary/ingress-canary-9htst" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570912 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d6340120-964d-4b2a-84c4-90490eacfd53-tmpfs\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570937 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjfhl\" (UniqueName: \"kubernetes.io/projected/86dafa98-d11c-4c07-96ae-917b679a1ccb-kube-api-access-pjfhl\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b58602-0c57-4bbd-8ea8-977e541d1e20-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.570992 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86dafa98-d11c-4c07-96ae-917b679a1ccb-serving-cert\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.571018 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzb8t\" (UniqueName: \"kubernetes.io/projected/8ea42342-71fc-475e-ade6-027a0e9df527-kube-api-access-nzb8t\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.571076 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-service-ca\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.574861 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5sjz\" (UniqueName: \"kubernetes.io/projected/71ae06fe-7606-45ad-ace3-2939ba858156-kube-api-access-k5sjz\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.574916 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdz2f\" (UniqueName: \"kubernetes.io/projected/5c5eb284-ad88-40eb-b21e-fe2d643e833d-kube-api-access-jdz2f\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.574952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gkhr\" (UniqueName: \"kubernetes.io/projected/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-kube-api-access-6gkhr\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.574993 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-config\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.575025 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdb28\" (UniqueName: \"kubernetes.io/projected/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-kube-api-access-fdb28\") pod \"machine-config-server-fcftz\" (UID: \"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.575718 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/694823ec-105f-4183-9dfb-8fa7f414c8ac-config-volume\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.578065 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86dafa98-d11c-4c07-96ae-917b679a1ccb-config\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.578456 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-ca\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.578707 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/71ae06fe-7606-45ad-ace3-2939ba858156-metrics-tls\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.579453 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ae06fe-7606-45ad-ace3-2939ba858156-config-volume\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.580455 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-config\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.580479 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-plugins-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.581026 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-942p6\" (UniqueName: \"kubernetes.io/projected/c499b055-70b4-4568-9e27-6cd9f38d54d5-kube-api-access-942p6\") pod \"multus-admission-controller-857f4d67dd-p9vnd\" (UID: \"c499b055-70b4-4568-9e27-6cd9f38d54d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.580637 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-mountpoint-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.581848 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5c5eb284-ad88-40eb-b21e-fe2d643e833d-signing-cabundle\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.582743 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-images\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.585075 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b58602-0c57-4bbd-8ea8-977e541d1e20-config\") 
pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.585722 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.586309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-srv-cert\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.588247 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-registration-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.588251 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.588638 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d6340120-964d-4b2a-84c4-90490eacfd53-tmpfs\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.590223 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm98m\" (UID: \"486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.591683 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d6340120-964d-4b2a-84c4-90490eacfd53-apiservice-cert\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.592289 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5945db78-57d1-446d-9baa-57543c83ba4b-serving-cert\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc 
kubenswrapper[4688]: I1125 12:16:34.592876 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ea42342-71fc-475e-ade6-027a0e9df527-proxy-tls\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.593639 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ea42342-71fc-475e-ade6-027a0e9df527-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.593749 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7952e5c1-f45f-492b-9f7c-b92b2d079994-csi-data-dir\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.595562 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-proxy-tls\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.596427 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-client\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.598009 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea6c5347-6ca8-4ed2-aed1-543a71ae1e11-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-98k7h\" (UID: \"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.598073 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.598420 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.601816 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/23b58602-0c57-4bbd-8ea8-977e541d1e20-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.602103 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.602343 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/668dbb02-474b-4540-b12b-f7d316925e42-cert\") pod \"ingress-canary-9htst\" (UID: \"668dbb02-474b-4540-b12b-f7d316925e42\") " pod="openshift-ingress-canary/ingress-canary-9htst" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.603116 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-certs\") pod \"machine-config-server-fcftz\" (UID: \"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.603448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-node-bootstrap-token\") pod \"machine-config-server-fcftz\" (UID: \"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.603806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6340120-964d-4b2a-84c4-90490eacfd53-webhook-cert\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.604755 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5c5eb284-ad88-40eb-b21e-fe2d643e833d-signing-key\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.604796 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1166912-b9ca-439f-b012-8312d1b51b0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.604968 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/694823ec-105f-4183-9dfb-8fa7f414c8ac-secret-volume\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.606541 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3fee5d-cac1-456f-a6f0-bfb874c9ee26-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tn9r6\" (UID: \"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.606993 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-srv-cert\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.625636 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s"] Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.637471 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9glnf\" (UniqueName: \"kubernetes.io/projected/b16521c9-4940-4ab4-acda-cec2b56f285e-kube-api-access-9glnf\") pod \"marketplace-operator-79b997595-g5gsx\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.648965 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23b58602-0c57-4bbd-8ea8-977e541d1e20-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj6gd\" (UID: \"23b58602-0c57-4bbd-8ea8-977e541d1e20\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.661720 4688 util.go:30] "No sandbox for pod can be found. 
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.661720 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.665601 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr6ch\" (UniqueName: \"kubernetes.io/projected/5945db78-57d1-446d-9baa-57543c83ba4b-kube-api-access-hr6ch\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.676806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5945db78-57d1-446d-9baa-57543c83ba4b-etcd-service-ca\") pod \"etcd-operator-b45778765-46zfm\" (UID: \"5945db78-57d1-446d-9baa-57543c83ba4b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.679797 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.679895 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86dafa98-d11c-4c07-96ae-917b679a1ccb-serving-cert\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx"
Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.680278 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.180251261 +0000 UTC m=+145.289880179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.686726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82b6p\" (UniqueName: \"kubernetes.io/projected/d6340120-964d-4b2a-84c4-90490eacfd53-kube-api-access-82b6p\") pod \"packageserver-d55dfcdfc-srlbr\" (UID: \"d6340120-964d-4b2a-84c4-90490eacfd53\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.691740 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.709736 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrtr\" (UniqueName: \"kubernetes.io/projected/486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c-kube-api-access-lnrtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm98m\" (UID: \"486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.734967 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lzlt\" (UniqueName: \"kubernetes.io/projected/694823ec-105f-4183-9dfb-8fa7f414c8ac-kube-api-access-4lzlt\") pod \"collect-profiles-29401215-sc2w6\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.760610 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdb28\" (UniqueName: \"kubernetes.io/projected/7bec1ff7-ef5e-43a5-bfd8-e114d72419a4-kube-api-access-fdb28\") pod \"machine-config-server-fcftz\" (UID: \"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4\") " pod="openshift-machine-config-operator/machine-config-server-fcftz"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.775711 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xnn\" (UniqueName: \"kubernetes.io/projected/a4358b47-cc59-4e24-9f91-b6c7fb9088ff-kube-api-access-64xnn\") pod \"catalog-operator-68c6474976-br8hp\" (UID: \"a4358b47-cc59-4e24-9f91-b6c7fb9088ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.781543 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.781913 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.281901446 +0000 UTC m=+145.391530314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.789748 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gkhr\" (UniqueName: \"kubernetes.io/projected/c7cda2db-3b88-4ba9-9e9d-87f9d57e052e-kube-api-access-6gkhr\") pod \"olm-operator-6b444d44fb-7vwpg\" (UID: \"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.806542 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdz2f\" (UniqueName: \"kubernetes.io/projected/5c5eb284-ad88-40eb-b21e-fe2d643e833d-kube-api-access-jdz2f\") pod \"service-ca-9c57cc56f-p5bsr\" (UID: \"5c5eb284-ad88-40eb-b21e-fe2d643e833d\") " pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.840684 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9f84\" (UniqueName: \"kubernetes.io/projected/7952e5c1-f45f-492b-9f7c-b92b2d079994-kube-api-access-z9f84\") pod \"csi-hostpathplugin-gtmpg\" (UID: \"7952e5c1-f45f-492b-9f7c-b92b2d079994\") " pod="hostpath-provisioner/csi-hostpathplugin-gtmpg"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.847166 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1166912-b9ca-439f-b012-8312d1b51b0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dmnhw\" (UID: \"a1166912-b9ca-439f-b012-8312d1b51b0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.865308 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.880148 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.882953 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.883151 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.383120001 +0000 UTC m=+145.492748899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.883464 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.883893 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.38387924 +0000 UTC m=+145.493508218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.890909 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwzv\" (UniqueName: \"kubernetes.io/projected/668dbb02-474b-4540-b12b-f7d316925e42-kube-api-access-tmwzv\") pod \"ingress-canary-9htst\" (UID: \"668dbb02-474b-4540-b12b-f7d316925e42\") " pod="openshift-ingress-canary/ingress-canary-9htst"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.893617 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.900101 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.906134 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.909720 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjfhl\" (UniqueName: \"kubernetes.io/projected/86dafa98-d11c-4c07-96ae-917b679a1ccb-kube-api-access-pjfhl\") pod \"service-ca-operator-777779d784-mccvx\" (UID: \"86dafa98-d11c-4c07-96ae-917b679a1ccb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.929234 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t8mk\" (UniqueName: \"kubernetes.io/projected/0d91f3b3-e871-4f7d-a3c1-35c363077c3a-kube-api-access-6t8mk\") pod \"machine-config-operator-74547568cd-9zxpp\" (UID: \"0d91f3b3-e871-4f7d-a3c1-35c363077c3a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.929554 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.937365 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzb8t\" (UniqueName: \"kubernetes.io/projected/8ea42342-71fc-475e-ade6-027a0e9df527-kube-api-access-nzb8t\") pod \"machine-config-controller-84d6567774-7ckdz\" (UID: \"8ea42342-71fc-475e-ade6-027a0e9df527\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.938024 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6"
Nov 25 12:16:34 crc kubenswrapper[4688]: W1125 12:16:34.942196 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod097d7867_7dee_4a99_9441_10f7f9aa5a76.slice/crio-f2fd5bf27d4c0ef59b0c2ea44052531ef80ebdd59296d080b009362b272bd08c WatchSource:0}: Error finding container f2fd5bf27d4c0ef59b0c2ea44052531ef80ebdd59296d080b009362b272bd08c: Status 404 returned error can't find the container with id f2fd5bf27d4c0ef59b0c2ea44052531ef80ebdd59296d080b009362b272bd08c
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.947053 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5sjz\" (UniqueName: \"kubernetes.io/projected/71ae06fe-7606-45ad-ace3-2939ba858156-kube-api-access-k5sjz\") pod \"dns-default-f9wfz\" (UID: \"71ae06fe-7606-45ad-ace3-2939ba858156\") " pod="openshift-dns/dns-default-f9wfz"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.954199 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.970129 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.984005 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp"
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.984357 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:34 crc kubenswrapper[4688]: E1125 12:16:34.984843 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.484824197 +0000 UTC m=+145.594453065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:34 crc kubenswrapper[4688]: I1125 12:16:34.997781 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.002686 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rddf\" (UniqueName: \"kubernetes.io/projected/4dd7b724-e84c-4638-b5cd-acd5e52aa110-kube-api-access-4rddf\") pod \"migrator-59844c95c7-q7hm8\" (UID: \"4dd7b724-e84c-4638-b5cd-acd5e52aa110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.004875 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.014084 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhk6\" (UniqueName: \"kubernetes.io/projected/ea6c5347-6ca8-4ed2-aed1-543a71ae1e11-kube-api-access-2mhk6\") pod \"package-server-manager-789f6589d5-98k7h\" (UID: \"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.020158 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gtmpg"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.034915 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-f9wfz"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.044553 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9htst"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.048105 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fcftz"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.085714 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.086095 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.586077673 +0000 UTC m=+145.695706541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.154915 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bbsld"]
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.186038 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.187998 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.188456 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.688437907 +0000 UTC m=+145.798066785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.245857 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.278944 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.289156 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.289493 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.789481227 +0000 UTC m=+145.899110095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.374839 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd"]
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.391495 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.391687 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.891670577 +0000 UTC m=+146.001299445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.391946 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.392272 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.892255741 +0000 UTC m=+146.001884609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:35 crc kubenswrapper[4688]: W1125 12:16:35.397242 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bec1ff7_ef5e_43a5_bfd8_e114d72419a4.slice/crio-43581c1935225c537c37cb358fe447d0fefc18c50fcc651a5d5389f908b91114 WatchSource:0}: Error finding container 43581c1935225c537c37cb358fe447d0fefc18c50fcc651a5d5389f908b91114: Status 404 returned error can't find the container with id 43581c1935225c537c37cb358fe447d0fefc18c50fcc651a5d5389f908b91114
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.418027 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p5bsr"]
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.430199 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5gsx"]
Nov 25 12:16:35 crc kubenswrapper[4688]: W1125 12:16:35.496143 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b58602_0c57_4bbd_8ea8_977e541d1e20.slice/crio-085ad0f3f0024d6b0a47d434fb20c1017d467c4e56acb5df4ef526fa9963b694 WatchSource:0}: Error finding container 085ad0f3f0024d6b0a47d434fb20c1017d467c4e56acb5df4ef526fa9963b694: Status 404 returned error can't find the container with id 085ad0f3f0024d6b0a47d434fb20c1017d467c4e56acb5df4ef526fa9963b694
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.497384 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.498153 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:35.998125258 +0000 UTC m=+146.107754136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.535635 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg"]
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.554601 4688 generic.go:334] "Generic (PLEG): container finished" podID="fb2199e3-75fc-44e3-93ab-205e84134ea3" containerID="acb3ae0b185e785fc4a8f9047428a2238d5b743088869b3607c160218e6fcf3f" exitCode=0
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.554667 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" event={"ID":"fb2199e3-75fc-44e3-93ab-205e84134ea3","Type":"ContainerDied","Data":"acb3ae0b185e785fc4a8f9047428a2238d5b743088869b3607c160218e6fcf3f"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.556876 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" event={"ID":"199bf3df-657c-4fec-99c8-00abf00d41c0","Type":"ContainerStarted","Data":"40c90494d7054d2eaaadd08e27184e68829e5ee11c02a0718337c4941ba91d7b"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.584091 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp"]
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.599727 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" event={"ID":"097d7867-7dee-4a99-9441-10f7f9aa5a76","Type":"ContainerStarted","Data":"6937d1affe44a842c8715570e86199a2b51ce57eb0ae899f8ee60c497e23a4e5"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.599757 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" event={"ID":"097d7867-7dee-4a99-9441-10f7f9aa5a76","Type":"ContainerStarted","Data":"f2fd5bf27d4c0ef59b0c2ea44052531ef80ebdd59296d080b009362b272bd08c"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.602943 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.603698 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.103684945 +0000 UTC m=+146.213313813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.614490 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kch26" event={"ID":"450e4026-47c3-4cb0-8c6b-3275bb2942d5","Type":"ContainerStarted","Data":"3b62a787a92f416a089690d422bbdd6adc283c548052e77d91765e13c303091b"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.616475 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-kch26"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.649218 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw"]
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.658326 4688 generic.go:334] "Generic (PLEG): container finished" podID="a465324f-c710-4331-80e5-68b5c0559887" containerID="e9ed4337f9ce4290c73f1e74ed9802eaeffaba638d30393290c923cfdafeb2da" exitCode=0
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.659791 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" event={"ID":"a465324f-c710-4331-80e5-68b5c0559887","Type":"ContainerDied","Data":"e9ed4337f9ce4290c73f1e74ed9802eaeffaba638d30393290c923cfdafeb2da"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.691087 4688 generic.go:334] "Generic (PLEG): container finished" podID="13b81bbb-d493-4b29-a99e-b4dc92d6100e" containerID="ee030602cc1ba529c64b3579f83845b0462da4b81b63ac05c5422f3f9a713bf3" exitCode=0
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.691154 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" event={"ID":"13b81bbb-d493-4b29-a99e-b4dc92d6100e","Type":"ContainerDied","Data":"ee030602cc1ba529c64b3579f83845b0462da4b81b63ac05c5422f3f9a713bf3"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.696724 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-d9t24" event={"ID":"8c1c5541-855e-4672-9bfe-080fdd2a42f1","Type":"ContainerStarted","Data":"60e9c408fc61f59225d86681fadce9dac779c81ff987d78a822e6dcde485bb92"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.697343 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-d9t24"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.698921 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fcftz" event={"ID":"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4","Type":"ContainerStarted","Data":"43581c1935225c537c37cb358fe447d0fefc18c50fcc651a5d5389f908b91114"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.701477 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" event={"ID":"71d8cd5b-9140-4311-b663-c7b1dad5bb60","Type":"ContainerStarted","Data":"1377ef95841d7b1f4b22a163fe28dd1d4447fb600b70a0dd8074aa0cc244d106"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.702919 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" event={"ID":"b9e976ef-06f6-4284-8fa7-38e61291b75d","Type":"ContainerStarted","Data":"7ab795548afe0da67b17f70975b30d6106420cde122243f3f70185f755d48020"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.704814 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.706169 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.206149282 +0000 UTC m=+146.315778150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.711576 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" event={"ID":"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4","Type":"ContainerStarted","Data":"875f41575a8ed64cea2970207bff7e02a1b73631610dea6a324f1570af8288da"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.713750 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn"
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.716630 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" event={"ID":"0a4461f7-9f73-409f-b237-3e81429a370c","Type":"ContainerStarted","Data":"8c3c1ec77cfe789bc9b79dd48a46d17ad8714f6ce511ab1c077aa192c5ac9868"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.729913 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xf6xd" event={"ID":"4888de7e-b0ae-4682-a404-545a9ba9cd82","Type":"ContainerStarted","Data":"756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.732236 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" event={"ID":"3a9faba3-cc0a-4870-947e-d500a0babe30","Type":"ContainerStarted","Data":"d69f0bf5aa794d7b134549595094578fc79d0963376eb41eb3120194a9faca12"}
Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.735758 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" event={"ID":"14c3d286-5003-4b44-81c6-220e491ba838","Type":"ContainerStarted","Data":"244ffaad088c3c6e9535814d40eeb27f146db38e17ec4bc058529a617c3266b6"}
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.740945 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" event={"ID":"c9eb1fd5-1d27-4d86-8ec4-4223513209d5","Type":"ContainerStarted","Data":"95ab895fa3053ae03ad7a3965ebc1a58e1b93f95ccf184861c0bd80084210f13"} Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.800361 4688 patch_prober.go:28] interesting pod/console-operator-58897d9998-kch26 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.800818 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kch26" podUID="450e4026-47c3-4cb0-8c6b-3275bb2942d5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.801281 4688 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7zpmk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.801302 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" podUID="14c3d286-5003-4b44-81c6-220e491ba838" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.801362 4688 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6pbkn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.801375 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" podUID="775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.801417 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-d9t24 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.801428 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d9t24" podUID="8c1c5541-855e-4672-9bfe-080fdd2a42f1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.807808 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.810174 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.31016324 +0000 UTC m=+146.419792108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.909582 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:35 crc kubenswrapper[4688]: E1125 12:16:35.910951 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.410934823 +0000 UTC m=+146.520563691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:35 crc kubenswrapper[4688]: I1125 12:16:35.984215 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-46zfm"] Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.011677 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.012197 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.512179657 +0000 UTC m=+146.621808525 (durationBeforeRetry 500ms). 
Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.012197 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.512179657 +0000 UTC m=+146.621808525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.113436 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.113701 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.613669209 +0000 UTC m=+146.723298077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.113801 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.114444 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.614436569 +0000 UTC m=+146.724065427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.218229 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h"]
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.218779 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.219039 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.719025071 +0000 UTC m=+146.828653939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.225839 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-f9wfz"]
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.228219 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6"]
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.242413 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp"]
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.258245 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hzmsk"
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.260173 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.260222 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Nov 25 12:16:36 crc kubenswrapper[4688]: W1125 12:16:36.283741 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea6c5347_6ca8_4ed2_aed1_543a71ae1e11.slice/crio-ce11dc582a95991dbebf695549eb10b13610a99dfcf425d1f76971fa7aeadc66 WatchSource:0}: Error finding container ce11dc582a95991dbebf695549eb10b13610a99dfcf425d1f76971fa7aeadc66: Status 404 returned error can't find the container with id ce11dc582a95991dbebf695549eb10b13610a99dfcf425d1f76971fa7aeadc66
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.320280 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.320754 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.820737148 +0000 UTC m=+146.930366016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.421954 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.422184 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.922156938 +0000 UTC m=+147.031785806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.422380 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.422768 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:36.922761394 +0000 UTC m=+147.032390272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.473473 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" podStartSLOduration=120.473450889 podStartE2EDuration="2m0.473450889s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.450599591 +0000 UTC m=+146.560228469" watchObservedRunningTime="2025-11-25 12:16:36.473450889 +0000 UTC m=+146.583079757"
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.491421 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9vnd"]
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.494320 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mccvx"]
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.505706 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8"]
Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.511495 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lgtc8" podStartSLOduration=119.511468632 podStartE2EDuration="1m59.511468632s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.487501076 +0000 UTC m=+146.597129944" watchObservedRunningTime="2025-11-25 12:16:36.511468632 +0000 UTC m=+146.621097500"
kubenswrapper[4688]: I1125 12:16:36.520353 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz"] Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.524484 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.524886 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gtmpg"] Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.524974 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.024957334 +0000 UTC m=+147.134586202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.551067 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6"] Nov 25 12:16:36 crc kubenswrapper[4688]: W1125 12:16:36.554758 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86dafa98_d11c_4c07_96ae_917b679a1ccb.slice/crio-75fbf72e099a7396c038094cc964ba73d7bdb4d35049ca98bae6b8ddf1270ae3 WatchSource:0}: Error finding container 75fbf72e099a7396c038094cc964ba73d7bdb4d35049ca98bae6b8ddf1270ae3: Status 404 returned error can't find the container with id 75fbf72e099a7396c038094cc964ba73d7bdb4d35049ca98bae6b8ddf1270ae3 Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.560237 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr"] Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.572031 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-kch26" podStartSLOduration=120.572013023 podStartE2EDuration="2m0.572013023s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.570800461 +0000 UTC m=+146.680429339" watchObservedRunningTime="2025-11-25 12:16:36.572013023 +0000 UTC m=+146.681641891" Nov 25 12:16:36 crc kubenswrapper[4688]: W1125 12:16:36.581744 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ea42342_71fc_475e_ade6_027a0e9df527.slice/crio-221808e5d5b045aafc5873f8ccf54faefbe05b5e2b64c0a9cafa71d2f17900ff WatchSource:0}: Error finding container 221808e5d5b045aafc5873f8ccf54faefbe05b5e2b64c0a9cafa71d2f17900ff: 
Status 404 returned error can't find the container with id 221808e5d5b045aafc5873f8ccf54faefbe05b5e2b64c0a9cafa71d2f17900ff Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.607612 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tr842" podStartSLOduration=120.607573632 podStartE2EDuration="2m0.607573632s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.603622048 +0000 UTC m=+146.713250916" watchObservedRunningTime="2025-11-25 12:16:36.607573632 +0000 UTC m=+146.717202500" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.638416 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.638766 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.138752977 +0000 UTC m=+147.248381845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.639222 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9htst"] Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.642409 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m"] Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.656884 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-d9t24" podStartSLOduration=120.65686719 podStartE2EDuration="2m0.65686719s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.654436397 +0000 UTC m=+146.764065265" watchObservedRunningTime="2025-11-25 12:16:36.65686719 +0000 UTC m=+146.766496068" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.683921 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" podStartSLOduration=119.683904596 podStartE2EDuration="1m59.683904596s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.682991552 +0000 UTC m=+146.792620420" watchObservedRunningTime="2025-11-25 12:16:36.683904596 +0000 UTC m=+146.793533454" Nov 25 
12:16:36 crc kubenswrapper[4688]: W1125 12:16:36.711278 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6340120_964d_4b2a_84c4_90490eacfd53.slice/crio-afc5bc634c4f22b40507f7a35c5b67bdbc6f88936d350f1270b134e8d48d274c WatchSource:0}: Error finding container afc5bc634c4f22b40507f7a35c5b67bdbc6f88936d350f1270b134e8d48d274c: Status 404 returned error can't find the container with id afc5bc634c4f22b40507f7a35c5b67bdbc6f88936d350f1270b134e8d48d274c Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.728787 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-shwnt" podStartSLOduration=120.728765718 podStartE2EDuration="2m0.728765718s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.723956962 +0000 UTC m=+146.833585830" watchObservedRunningTime="2025-11-25 12:16:36.728765718 +0000 UTC m=+146.838394586" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.743482 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.743842 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.243828812 +0000 UTC m=+147.353457670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.770547 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" event={"ID":"0d91f3b3-e871-4f7d-a3c1-35c363077c3a","Type":"ContainerStarted","Data":"bcad473d634ff02de9cac50dbf33bdf813a42b81e8b5e90fdaca7005727b9697"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.782954 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" event={"ID":"7952e5c1-f45f-492b-9f7c-b92b2d079994","Type":"ContainerStarted","Data":"03a4ea3b3733f97d8957d618db83d12bba7f0f7974e578e2b012b2f667d77c2a"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.788573 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" event={"ID":"71d8cd5b-9140-4311-b663-c7b1dad5bb60","Type":"ContainerStarted","Data":"308502be788540d222b09a41c932b9f78781a8eef2458d6409861d848cc4a3a8"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.788612 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" event={"ID":"71d8cd5b-9140-4311-b663-c7b1dad5bb60","Type":"ContainerStarted","Data":"634cc16bcc226cb33e13cb9ed125c7d55b4e548f97e2c85fb90c6302fbf1e370"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.792072 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" event={"ID":"5c5eb284-ad88-40eb-b21e-fe2d643e833d","Type":"ContainerStarted","Data":"e298210c2a2078423f7a2ac49d790aac2f88ef437c6544f642f44b85a605a6b0"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.792111 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" event={"ID":"5c5eb284-ad88-40eb-b21e-fe2d643e833d","Type":"ContainerStarted","Data":"620e4dda8fb50200e9b51ae191ef67e70f90eaf1171df8a8f82f22e436d13269"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.794102 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hzmsk" event={"ID":"943de0dc-b19a-4411-afc4-9e7a82a771bf","Type":"ContainerStarted","Data":"5226871d87dbab87f6ff8a1a748e3aff6877d8f244d8848a57cd3379a37e53d9"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.808233 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" event={"ID":"99c26895-5a88-439e-83df-ddb6c9d1a1cb","Type":"ContainerStarted","Data":"ed50a4fa8b0d90e600efeafec99f8cbd3f8787412a460263792bd192837c2e91"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.808284 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" event={"ID":"99c26895-5a88-439e-83df-ddb6c9d1a1cb","Type":"ContainerStarted","Data":"30a163325789555c20eb9a5e68fd41d3e5d0c4fec4cd0d4dc7b269c31f8167ae"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.808490 4688 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hzmsk" podStartSLOduration=119.808473711 podStartE2EDuration="1m59.808473711s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.808004628 +0000 UTC m=+146.917633496" watchObservedRunningTime="2025-11-25 12:16:36.808473711 +0000 UTC m=+146.918102579" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.816664 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" event={"ID":"c41361de-7942-4b97-97d8-9fd467394b25","Type":"ContainerStarted","Data":"6de4778e098bb54032d08095cf2fc5f887f413c2277d95a1a644def6fc5ce1fd"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.817298 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.819422 4688 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-67wqq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.819472 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" podUID="c41361de-7942-4b97-97d8-9fd467394b25" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.824804 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" event={"ID":"b16521c9-4940-4ab4-acda-cec2b56f285e","Type":"ContainerStarted","Data":"b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.824849 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" event={"ID":"b16521c9-4940-4ab4-acda-cec2b56f285e","Type":"ContainerStarted","Data":"fbc73f735ebdbb559630f43556005b133bf5795fd720644432fd894a6c6188e6"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.825223 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.826149 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" event={"ID":"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e","Type":"ContainerStarted","Data":"f35a20de7e081a12094ce8c526078b447f407b2087e7863861cf69990fad72eb"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.827410 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g5gsx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.827448 4688 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" podUID="b16521c9-4940-4ab4-acda-cec2b56f285e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.828671 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" event={"ID":"fb2199e3-75fc-44e3-93ab-205e84134ea3","Type":"ContainerStarted","Data":"ff92ac9681d9dd54824ea84bd6c5e328a60e1bf719015ab0503873dd963331d2"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.828781 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.830882 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f9wfz" event={"ID":"71ae06fe-7606-45ad-ace3-2939ba858156","Type":"ContainerStarted","Data":"689b0e3e9687e721b1f517bfe0280102b43a0b316346f792245be2dfbaef420d"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.832774 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" event={"ID":"8ea42342-71fc-475e-ade6-027a0e9df527","Type":"ContainerStarted","Data":"221808e5d5b045aafc5873f8ccf54faefbe05b5e2b64c0a9cafa71d2f17900ff"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.845012 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.847098 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.347083239 +0000 UTC m=+147.456712107 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.847754 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" event={"ID":"a465324f-c710-4331-80e5-68b5c0559887","Type":"ContainerStarted","Data":"d7d01cd645a7f701c3e57319daa34c9ea9e142d95ca009e4c640967ee79b8246"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.857916 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" event={"ID":"23b58602-0c57-4bbd-8ea8-977e541d1e20","Type":"ContainerStarted","Data":"8d7555a66e7b958b9c70a64ae8e07ad17c35496036b4329734e69082affe5d75"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.857966 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" event={"ID":"23b58602-0c57-4bbd-8ea8-977e541d1e20","Type":"ContainerStarted","Data":"085ad0f3f0024d6b0a47d434fb20c1017d467c4e56acb5df4ef526fa9963b694"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.865414 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" event={"ID":"d6340120-964d-4b2a-84c4-90490eacfd53","Type":"ContainerStarted","Data":"afc5bc634c4f22b40507f7a35c5b67bdbc6f88936d350f1270b134e8d48d274c"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.872992 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" event={"ID":"694823ec-105f-4183-9dfb-8fa7f414c8ac","Type":"ContainerStarted","Data":"e37d7e4f73e6802b1ebed4e9cae0e628afbf6cf3b9eeacdce437b1f3ae34d86d"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.888821 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" event={"ID":"3a9faba3-cc0a-4870-947e-d500a0babe30","Type":"ContainerStarted","Data":"dfd4461071da3e754111509df7c1d5de12c565793f41e1b34f86a88e162b98ce"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.896093 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" event={"ID":"86dafa98-d11c-4c07-96ae-917b679a1ccb","Type":"ContainerStarted","Data":"75fbf72e099a7396c038094cc964ba73d7bdb4d35049ca98bae6b8ddf1270ae3"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.907159 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" event={"ID":"a4358b47-cc59-4e24-9f91-b6c7fb9088ff","Type":"ContainerStarted","Data":"93d9e098f794846c785952f5ddd4805c9b5ae07e6ced794f0f710e077e8ddbbc"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.908854 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xf6xd" podStartSLOduration=120.908838232 podStartE2EDuration="2m0.908838232s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.90795725 +0000 UTC m=+147.017586138" watchObservedRunningTime="2025-11-25 12:16:36.908838232 +0000 UTC m=+147.018467100" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.913767 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" event={"ID":"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26","Type":"ContainerStarted","Data":"6f469b7f34b676ea2547f2b33a91fbf3b2b08a11590ff21a2775953cb3ea8704"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.943580 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztp4s" podStartSLOduration=119.943497878 podStartE2EDuration="1m59.943497878s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:36.942843891 +0000 UTC m=+147.052472769" watchObservedRunningTime="2025-11-25 12:16:36.943497878 +0000 UTC m=+147.053126746" Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.951672 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:36 crc kubenswrapper[4688]: E1125 12:16:36.952804 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.452774851 +0000 UTC m=+147.562403779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.982849 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" event={"ID":"199bf3df-657c-4fec-99c8-00abf00d41c0","Type":"ContainerStarted","Data":"7c141f7a9008a8db336c8327c2420c8edf744044cd14496f1100f6b6562a1de5"} Nov 25 12:16:36 crc kubenswrapper[4688]: I1125 12:16:36.998743 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" event={"ID":"c499b055-70b4-4568-9e27-6cd9f38d54d5","Type":"ContainerStarted","Data":"91e895f4e7e6bd2036e9d98afa493953dd69c1ebda5da30a8808584969b9a0ac"} Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.008009 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" event={"ID":"5945db78-57d1-446d-9baa-57543c83ba4b","Type":"ContainerStarted","Data":"60c6e12b255ad93cb83193c8d71bc7cd806df58201d9e316df893c8020a906e6"} Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.032509 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stpdt" podStartSLOduration=121.032496494 podStartE2EDuration="2m1.032496494s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.031965019 +0000 UTC m=+147.141593887" watchObservedRunningTime="2025-11-25 12:16:37.032496494 +0000 UTC m=+147.142125362" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.055440 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.055856 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.555839553 +0000 UTC m=+147.665468421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.062249 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" event={"ID":"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11","Type":"ContainerStarted","Data":"ce11dc582a95991dbebf695549eb10b13610a99dfcf425d1f76971fa7aeadc66"} Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.070816 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" event={"ID":"a1166912-b9ca-439f-b012-8312d1b51b0d","Type":"ContainerStarted","Data":"f16acaac9b3b0b1fd11d11f6883d08e9d9d2bf09cd2e7f255a0b50b999acbb24"} Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.070864 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" event={"ID":"a1166912-b9ca-439f-b012-8312d1b51b0d","Type":"ContainerStarted","Data":"3a4332de104dac25de84baa8a8ea63fe73718220793370e299a7119b74e6d1c9"} Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.077963 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fcftz" event={"ID":"7bec1ff7-ef5e-43a5-bfd8-e114d72419a4","Type":"ContainerStarted","Data":"437d81ec18840606e2a048cf94cc37e009037bac9c73089bd60b6d88726d598b"} Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.082899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8" event={"ID":"4dd7b724-e84c-4638-b5cd-acd5e52aa110","Type":"ContainerStarted","Data":"3e118c6b61034762c4d413a2e8f38bc58e9fd395f0bd0085bd8e08dfa233ac9c"} Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.083848 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-d9t24 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.083878 4688 patch_prober.go:28] interesting pod/console-operator-58897d9998-kch26 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.083905 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kch26" podUID="450e4026-47c3-4cb0-8c6b-3275bb2942d5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.083879 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d9t24" podUID="8c1c5541-855e-4672-9bfe-080fdd2a42f1" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.084280 4688 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7zpmk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.084296 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" podUID="14c3d286-5003-4b44-81c6-220e491ba838" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.096273 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" podStartSLOduration=120.096251139 podStartE2EDuration="2m0.096251139s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.093866207 +0000 UTC m=+147.203495075" watchObservedRunningTime="2025-11-25 12:16:37.096251139 +0000 UTC m=+147.205880007" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.134442 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-p5bsr" podStartSLOduration=120.134428587 podStartE2EDuration="2m0.134428587s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.134241631 +0000 UTC m=+147.243870519" watchObservedRunningTime="2025-11-25 12:16:37.134428587 +0000 UTC m=+147.244057455" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.134693 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j6l7l" podStartSLOduration=121.134688653 podStartE2EDuration="2m1.134688653s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.116728294 +0000 UTC m=+147.226357162" watchObservedRunningTime="2025-11-25 12:16:37.134688653 +0000 UTC m=+147.244317521" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.157155 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.157372 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.657337325 +0000 UTC m=+147.766966193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.157547 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.158547 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.658535026 +0000 UTC m=+147.768163894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.197842 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" podStartSLOduration=120.197825003 podStartE2EDuration="2m0.197825003s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.179491394 +0000 UTC m=+147.289120262" watchObservedRunningTime="2025-11-25 12:16:37.197825003 +0000 UTC m=+147.307453871" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.219972 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-8djqh" podStartSLOduration=120.219957101 podStartE2EDuration="2m0.219957101s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.217988869 +0000 UTC m=+147.327617727" watchObservedRunningTime="2025-11-25 12:16:37.219957101 +0000 UTC m=+147.329585969" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.265178 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.287899 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.787880925 +0000 UTC m=+147.897509793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.291226 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 12:16:37 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Nov 25 12:16:37 crc kubenswrapper[4688]: [+]process-running ok Nov 25 12:16:37 crc kubenswrapper[4688]: healthz check failed Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.291267 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.317285 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" podStartSLOduration=121.317270943 podStartE2EDuration="2m1.317270943s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.29877426 +0000 UTC m=+147.408403128" watchObservedRunningTime="2025-11-25 12:16:37.317270943 +0000 UTC m=+147.426899811" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.388587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.389084 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.889071318 +0000 UTC m=+147.998700186 (durationBeforeRetry 500ms). 
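Every rejected volume operation above is requeued by nestedpendingoperations with a delay, logged as durationBeforeRetry. This window only ever shows the initial 500ms; on persistent failure the delay is expected to grow exponentially up to a ceiling. A hedged sketch of that schedule follows; the 500ms start is taken from the log, while the doubling factor and the roughly 2m2s cap reflect kubelet's exponential-backoff behavior as I recall it (treat both as assumptions):

```go
// Illustrative sketch of the retry-delay schedule behind
// "No retries permitted until ... (durationBeforeRetry 500ms)".
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 500 * time.Millisecond        // matches the log
		ceiling = 2*time.Minute + 2*time.Second // assumed cap
	)
	delay := initial
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
		delay *= 2
		if delay > ceiling {
			delay = ceiling
		}
	}
}
```

The entries here stay at 500ms throughout, and the loop is expected to resolve on its own once the driver pod finishes starting. (The Error: clause directly below completes the 12:16:37.389084 entry above.)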
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.401712 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fcftz" podStartSLOduration=6.4016946279999996 podStartE2EDuration="6.401694628s" podCreationTimestamp="2025-11-25 12:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.401304618 +0000 UTC m=+147.510933486" watchObservedRunningTime="2025-11-25 12:16:37.401694628 +0000 UTC m=+147.511323496" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.402214 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6vn95" podStartSLOduration=120.402208872 podStartE2EDuration="2m0.402208872s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.343770455 +0000 UTC m=+147.453399323" watchObservedRunningTime="2025-11-25 12:16:37.402208872 +0000 UTC m=+147.511837740" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.439978 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dmnhw" podStartSLOduration=120.439960369 podStartE2EDuration="2m0.439960369s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:37.439462565 +0000 UTC m=+147.549091433" watchObservedRunningTime="2025-11-25 12:16:37.439960369 +0000 UTC m=+147.549589237" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.489818 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.490155 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:37.99014069 +0000 UTC m=+148.099769548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.593433 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.593828 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.093810358 +0000 UTC m=+148.203439226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.694810 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.695435 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.195409072 +0000 UTC m=+148.305038010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.695493 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.696066 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.196055989 +0000 UTC m=+148.305684857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.793878 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.796368 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.796924 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.296904474 +0000 UTC m=+148.406533342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:37 crc kubenswrapper[4688]: I1125 12:16:37.898332 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:37 crc kubenswrapper[4688]: E1125 12:16:37.898665 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.398647631 +0000 UTC m=+148.508276499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.002215 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.002427 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.502398072 +0000 UTC m=+148.612026950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.002844 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.003296 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.503283565 +0000 UTC m=+148.612912433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.104013 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.104282 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.604215042 +0000 UTC m=+148.713843910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.104556 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.104892 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.60487985 +0000 UTC m=+148.714508728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.108595 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" event={"ID":"5945db78-57d1-446d-9baa-57543c83ba4b","Type":"ContainerStarted","Data":"76a3383217472ea0b1863b3bdfe3a0c14a22e1a337a39080ab50eb05af06a3dd"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.112144 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" event={"ID":"d6340120-964d-4b2a-84c4-90490eacfd53","Type":"ContainerStarted","Data":"dd7554fb832a4f6b2be87a7b0c125083a665797d7e20ccaa08807f64fe44cfd1"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.112971 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.128825 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" event={"ID":"c499b055-70b4-4568-9e27-6cd9f38d54d5","Type":"ContainerStarted","Data":"b8b61007f227c4ac09ccc523a1e40d77b00dcfac463a4b11aa10b11849c07003"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.130183 4688 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-srlbr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.130239 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" podUID="d6340120-964d-4b2a-84c4-90490eacfd53" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.142705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" event={"ID":"c7cda2db-3b88-4ba9-9e9d-87f9d57e052e","Type":"ContainerStarted","Data":"bb44a5e46caf4cba60efda9521480d4745a25fcf8147d99896d8507b26545e6c"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.143124 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.156860 4688 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7vwpg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.156926 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" podUID="c7cda2db-3b88-4ba9-9e9d-87f9d57e052e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.162366 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9htst" event={"ID":"668dbb02-474b-4540-b12b-f7d316925e42","Type":"ContainerStarted","Data":"87c685c3ad5376552fe8c98567384660b54acf7043110324a6f3e2dccaf1ace2"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.162412 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9htst" event={"ID":"668dbb02-474b-4540-b12b-f7d316925e42","Type":"ContainerStarted","Data":"4ac943205a358282d56eeea7c5c8a6c2fee3162974e12a3091d6720e3b256cc4"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.172499 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" event={"ID":"0d91f3b3-e871-4f7d-a3c1-35c363077c3a","Type":"ContainerStarted","Data":"10b935c22c16449238d5b60b5e40f40191c3ab39ec84932497c9683a688cc193"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.172575 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" event={"ID":"0d91f3b3-e871-4f7d-a3c1-35c363077c3a","Type":"ContainerStarted","Data":"87958d7469b2c4ea28d6b8a819f09e822ae11ff67bf8f598afc8f4a14d30fd28"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.179126 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" event={"ID":"8ea42342-71fc-475e-ade6-027a0e9df527","Type":"ContainerStarted","Data":"c8fd1ad0e1d26690831988c422b88cd945b90b3f643382fb8e913c83692670d9"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.179179 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" event={"ID":"8ea42342-71fc-475e-ade6-027a0e9df527","Type":"ContainerStarted","Data":"fce5acb68b6fb6e42620b3913d33b3cb523ce3844629e1704b6163eaa2e7eaed"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.179160 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr" podStartSLOduration=121.17914585 podStartE2EDuration="2m1.17914585s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.178643296 +0000 UTC m=+148.288272174" watchObservedRunningTime="2025-11-25 12:16:38.17914585 +0000 UTC m=+148.288774718"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.180381 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-46zfm" podStartSLOduration=121.180368742 podStartE2EDuration="2m1.180368742s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.150934413 +0000 UTC m=+148.260563291" watchObservedRunningTime="2025-11-25 12:16:38.180368742 +0000 UTC m=+148.289997610"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.188318 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8" event={"ID":"4dd7b724-e84c-4638-b5cd-acd5e52aa110","Type":"ContainerStarted","Data":"0d87d14484f0d67a28529ba430296ec6328b1f5a79108be406a4105e1ade507c"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.188368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8" event={"ID":"4dd7b724-e84c-4638-b5cd-acd5e52aa110","Type":"ContainerStarted","Data":"8ee7cb227777f10ed1d1f14af02d6bb764addce0ad0562fbc0ce1f7db447a2fb"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.190560 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" event={"ID":"86dafa98-d11c-4c07-96ae-917b679a1ccb","Type":"ContainerStarted","Data":"9e5ac0d89ed175cd1ea027d8d75840e126432e8142467d11136f021efce0e687"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.193422 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" event={"ID":"694823ec-105f-4183-9dfb-8fa7f414c8ac","Type":"ContainerStarted","Data":"a1057226e36ba81ad338e64d5cb85c59996230ee07efc7d6ac6df1f36c24fc46"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.196457 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" event={"ID":"a4358b47-cc59-4e24-9f91-b6c7fb9088ff","Type":"ContainerStarted","Data":"caf67b458a6439ff8e1ecfa0d7911ac718979e0f52b883e7bdbb75924c222991"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.197251 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.208993 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg" podStartSLOduration=121.208974809 podStartE2EDuration="2m1.208974809s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.205339844 +0000 UTC m=+148.314968732" watchObservedRunningTime="2025-11-25 12:16:38.208974809 +0000 UTC m=+148.318603697"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.209323 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.210653 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.710634462 +0000 UTC m=+148.820263370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.219779 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" event={"ID":"486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c","Type":"ContainerStarted","Data":"ede801e907702b1a09a40377babaf8bef7222e424140e8cec610b6a9e476a773"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.219834 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" event={"ID":"486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c","Type":"ContainerStarted","Data":"4cb51e66c527908a0597a750fcbe2168534c22f01254f6d6c815b962ace7489c"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.220742 4688 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-br8hp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.220792 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" podUID="a4358b47-cc59-4e24-9f91-b6c7fb9088ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.246014 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f9wfz" event={"ID":"71ae06fe-7606-45ad-ace3-2939ba858156","Type":"ContainerStarted","Data":"cdfc405e27a6156a528252d53ef76fd248c8b22e1de86a19c978bfb38f922a10"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.246065 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-f9wfz" event={"ID":"71ae06fe-7606-45ad-ace3-2939ba858156","Type":"ContainerStarted","Data":"e55c2cf3a6cf8e01b689ff3b3d1cd60cda2d848f8a099aed8e2f81e43876cb7b"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.246099 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-f9wfz"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.270547 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 12:16:38 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Nov 25 12:16:38 crc kubenswrapper[4688]: [+]process-running ok
Nov 25 12:16:38 crc kubenswrapper[4688]: healthz check failed
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.270599 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.271329 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" event={"ID":"3a9faba3-cc0a-4870-947e-d500a0babe30","Type":"ContainerStarted","Data":"894329fc5c395047b96294a5639a955578ba62eda990f6737349c628ab68eeca"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.313241 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.318962 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.818938992 +0000 UTC m=+148.928567860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.342124 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" event={"ID":"b9e976ef-06f6-4284-8fa7-38e61291b75d","Type":"ContainerStarted","Data":"8a248a690caea0beaaa5c1307b31a157f89bc94501c19cf79067f2fb5c362eb4"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.344473 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" event={"ID":"a465324f-c710-4331-80e5-68b5c0559887","Type":"ContainerStarted","Data":"78e2a89f909848ddf2355c36a08b67ff037d3788ed8eaf68ac974bd1f84c1fd9"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.345922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" event={"ID":"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11","Type":"ContainerStarted","Data":"76d69c0766577497d4557e44c7421ea4dbe3c4803b947081d6acb218c10ca9a8"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.345950 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" event={"ID":"ea6c5347-6ca8-4ed2-aed1-543a71ae1e11","Type":"ContainerStarted","Data":"a4cfcc1495c9fd65fe2e77f9c7d5b66a7866492e8e27518e6103278669fe7643"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.346285 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.353784 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" event={"ID":"13b81bbb-d493-4b29-a99e-b4dc92d6100e","Type":"ContainerStarted","Data":"a752e9353ae7377b940352a4c5257a758e39a69059a24f2b0b2e872761cd27c0"}
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.357678 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g5gsx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.357732 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" podUID="b16521c9-4940-4ab4-acda-cec2b56f285e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.373907 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.387461 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.390889 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.394856 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp" podStartSLOduration=121.394838035 podStartE2EDuration="2m1.394838035s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.394001653 +0000 UTC m=+148.503630521" watchObservedRunningTime="2025-11-25 12:16:38.394838035 +0000 UTC m=+148.504466903"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.396744 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9htst" podStartSLOduration=7.396733354 podStartE2EDuration="7.396733354s" podCreationTimestamp="2025-11-25 12:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.294803891 +0000 UTC m=+148.404432759" watchObservedRunningTime="2025-11-25 12:16:38.396733354 +0000 UTC m=+148.506362222"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.403783 4688 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-bndd9 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.18:8443/livez\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.403842 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" podUID="13b81bbb-d493-4b29-a99e-b4dc92d6100e" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.18:8443/livez\": dial tcp 10.217.0.18:8443: connect: connection refused"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.410665 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.410947 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.414678 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.415966 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:38.915946656 +0000 UTC m=+149.025575524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.426773 4688 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rpxm7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.426826 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" podUID="a465324f-c710-4331-80e5-68b5c0559887" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.518544 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.522035 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.022021288 +0000 UTC m=+149.131650156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.537786 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mccvx" podStartSLOduration=121.537755819 podStartE2EDuration="2m1.537755819s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.465195783 +0000 UTC m=+148.574824681" watchObservedRunningTime="2025-11-25 12:16:38.537755819 +0000 UTC m=+148.647384687"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.539045 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" podStartSLOduration=98.539038142 podStartE2EDuration="1m38.539038142s" podCreationTimestamp="2025-11-25 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.538276632 +0000 UTC m=+148.647905500" watchObservedRunningTime="2025-11-25 12:16:38.539038142 +0000 UTC m=+148.648667010"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.574468 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ckdz" podStartSLOduration=121.574449547 podStartE2EDuration="2m1.574449547s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.569014865 +0000 UTC m=+148.678643733" watchObservedRunningTime="2025-11-25 12:16:38.574449547 +0000 UTC m=+148.684078415"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.594042 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q7hm8" podStartSLOduration=121.594024758 podStartE2EDuration="2m1.594024758s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.593636799 +0000 UTC m=+148.703265667" watchObservedRunningTime="2025-11-25 12:16:38.594024758 +0000 UTC m=+148.703653626"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.623552 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.624592 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.124560236 +0000 UTC m=+149.234189104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.624777 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.625145 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.125133391 +0000 UTC m=+149.234762259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.636899 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9zxpp" podStartSLOduration=121.636876998 podStartE2EDuration="2m1.636876998s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.632896644 +0000 UTC m=+148.742525512" watchObservedRunningTime="2025-11-25 12:16:38.636876998 +0000 UTC m=+148.746505876"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.706787 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" podStartSLOduration=121.706746624 podStartE2EDuration="2m1.706746624s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.677538611 +0000 UTC m=+148.787167479" watchObservedRunningTime="2025-11-25 12:16:38.706746624 +0000 UTC m=+148.816375502"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.726141 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.726449 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.226434408 +0000 UTC m=+149.336063276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.732229 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z98wq" podStartSLOduration=122.732212309 podStartE2EDuration="2m2.732212309s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.731769307 +0000 UTC m=+148.841398175" watchObservedRunningTime="2025-11-25 12:16:38.732212309 +0000 UTC m=+148.841841177"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.827936 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.828271 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.328256617 +0000 UTC m=+149.437885485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.831180 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" podStartSLOduration=122.831159443 podStartE2EDuration="2m2.831159443s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.782507773 +0000 UTC m=+148.892136641" watchObservedRunningTime="2025-11-25 12:16:38.831159443 +0000 UTC m=+148.940788311"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.888320 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bbsld" podStartSLOduration=121.888299026 podStartE2EDuration="2m1.888299026s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.837233043 +0000 UTC m=+148.946861911" watchObservedRunningTime="2025-11-25 12:16:38.888299026 +0000 UTC m=+148.997927894"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.929551 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:38 crc kubenswrapper[4688]: E1125 12:16:38.929960 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.429929324 +0000 UTC m=+149.539558192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.934355 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" podStartSLOduration=121.934333319 podStartE2EDuration="2m1.934333319s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.932296356 +0000 UTC m=+149.041925224" watchObservedRunningTime="2025-11-25 12:16:38.934333319 +0000 UTC m=+149.043962187"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.935162 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" podStartSLOduration=121.935154991 podStartE2EDuration="2m1.935154991s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.891567292 +0000 UTC m=+149.001196160" watchObservedRunningTime="2025-11-25 12:16:38.935154991 +0000 UTC m=+149.044783859"
Nov 25 12:16:38 crc kubenswrapper[4688]: I1125 12:16:38.975878 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm98m" podStartSLOduration=121.975858394 podStartE2EDuration="2m1.975858394s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:38.970014982 +0000 UTC m=+149.079643860" watchObservedRunningTime="2025-11-25 12:16:38.975858394 +0000 UTC m=+149.085487272"
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.014852 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj6gd" podStartSLOduration=122.014833903 podStartE2EDuration="2m2.014833903s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:39.006656979 +0000 UTC m=+149.116285867" watchObservedRunningTime="2025-11-25 12:16:39.014833903 +0000 UTC m=+149.124462771"
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.031922 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.032281 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.532268428 +0000 UTC m=+149.641897296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.076988 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-f9wfz" podStartSLOduration=8.076971386 podStartE2EDuration="8.076971386s" podCreationTimestamp="2025-11-25 12:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:39.075913768 +0000 UTC m=+149.185542636" watchObservedRunningTime="2025-11-25 12:16:39.076971386 +0000 UTC m=+149.186600254"
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.132908 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.133132 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.633084171 +0000 UTC m=+149.742713039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.133240 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.133801 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.63379005 +0000 UTC m=+149.743418928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.237020 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.237340 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.737324414 +0000 UTC m=+149.846953282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.268087 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 12:16:39 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Nov 25 12:16:39 crc kubenswrapper[4688]: [+]process-running ok
Nov 25 12:16:39 crc kubenswrapper[4688]: healthz check failed
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.268423 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.338269 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.338648 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.838636232 +0000 UTC m=+149.948265100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.377704 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tn9r6" event={"ID":"fc3fee5d-cac1-456f-a6f0-bfb874c9ee26","Type":"ContainerStarted","Data":"8f1dc8e945949ff6aeae276f689fe6b8a64792f0b6563f10359fc3854e26ef14"}
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.393030 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" event={"ID":"7952e5c1-f45f-492b-9f7c-b92b2d079994","Type":"ContainerStarted","Data":"c24af3483e63c9d3f51989a04496f66dd354c91cdf425ab4460d3e10eb957878"}
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.414980 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" event={"ID":"c499b055-70b4-4568-9e27-6cd9f38d54d5","Type":"ContainerStarted","Data":"34e8477d69a6b5ff9841e06d6fc52e5adfb58271c9a8c0f7d217b739ae706b60"}
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.430190 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7vwpg"
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.433731 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-br8hp"
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.439967 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.440288 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:39.940274406 +0000 UTC m=+150.049903274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.502803 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9vnd" podStartSLOduration=122.50278599 podStartE2EDuration="2m2.50278599s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:39.481852703 +0000 UTC m=+149.591481571" watchObservedRunningTime="2025-11-25 12:16:39.50278599 +0000 UTC m=+149.612414858"
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.541748 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.550491 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.050476826 +0000 UTC m=+150.160105694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.642751 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.644292 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.144275366 +0000 UTC m=+150.253904234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.745151 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.745572 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.245558782 +0000 UTC m=+150.355187660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.846141 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.846515 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.3464896 +0000 UTC m=+150.456118468 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:39 crc kubenswrapper[4688]: I1125 12:16:39.948660 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:39 crc kubenswrapper[4688]: E1125 12:16:39.949148 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.449125011 +0000 UTC m=+150.558753879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.051658 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.051914 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.551870695 +0000 UTC m=+150.661499563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.052127 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.052653 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.552644675 +0000 UTC m=+150.662273543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.153283 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.153780 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.653747066 +0000 UTC m=+150.763375934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.171582 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-srlbr"
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.255826 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.256129 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.756116531 +0000 UTC m=+150.865745399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.262363 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 12:16:40 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Nov 25 12:16:40 crc kubenswrapper[4688]: [+]process-running ok
Nov 25 12:16:40 crc kubenswrapper[4688]: healthz check failed
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.262430 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.357313 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.357441 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.857421007 +0000 UTC m=+150.967049875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.357792 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.358083 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.858075515 +0000 UTC m=+150.967704383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.458819 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.459001 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.95897294 +0000 UTC m=+151.068601808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.459327 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.459673 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:40.959665399 +0000 UTC m=+151.069294267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.561093 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.561280 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.061253422 +0000 UTC m=+151.170882290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.561370 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.561543 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.561727 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.061713084 +0000 UTC m=+151.171341952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.561870 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.566670 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.589244 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.610054 4688 
patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-c6wp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.610088 4688 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-c6wp5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.610126 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" podUID="fb2199e3-75fc-44e3-93ab-205e84134ea3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.610153 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" podUID="fb2199e3-75fc-44e3-93ab-205e84134ea3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.663453 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.663658 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.163630057 +0000 UTC m=+151.273258925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.663787 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.663861 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.663889 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.664207 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.164196712 +0000 UTC m=+151.273825660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.667468 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.670900 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.764816 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.765341 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.265325434 +0000 UTC m=+151.374954302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.866107 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.866586 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.366569629 +0000 UTC m=+151.476198497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.867621 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.899541 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.910814 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 12:16:40 crc kubenswrapper[4688]: I1125 12:16:40.967158 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:40 crc kubenswrapper[4688]: E1125 12:16:40.967466 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.467451294 +0000 UTC m=+151.577080162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.068341 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.068695 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.568682449 +0000 UTC m=+151.678311317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.170261 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.170719 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.670700104 +0000 UTC m=+151.780328972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.186368 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2glp"] Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.193040 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.194227 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2glp"] Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.208788 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.273613 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.273945 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.773932051 +0000 UTC m=+151.883560919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.316495 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 12:16:41 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Nov 25 12:16:41 crc kubenswrapper[4688]: [+]process-running ok Nov 25 12:16:41 crc kubenswrapper[4688]: healthz check failed Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.316845 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.349076 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6g9dv"] Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.350057 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.356282 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.366263 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6g9dv"] Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.376963 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.377297 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2ph\" (UniqueName: \"kubernetes.io/projected/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-kube-api-access-pn2ph\") pod \"community-operators-w2glp\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.377348 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-catalog-content\") pod \"community-operators-w2glp\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.377373 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-utilities\") pod \"community-operators-w2glp\" 
(UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.377550 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.877524218 +0000 UTC m=+151.987153086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.478507 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-utilities\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.478702 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.478748 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rchkb\" (UniqueName: \"kubernetes.io/projected/52724116-f0b6-48c0-9de2-a6dc6ba73524-kube-api-access-rchkb\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.478772 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn2ph\" (UniqueName: \"kubernetes.io/projected/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-kube-api-access-pn2ph\") pod \"community-operators-w2glp\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.478794 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-catalog-content\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.478817 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-catalog-content\") pod \"community-operators-w2glp\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.478837 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-utilities\") pod \"community-operators-w2glp\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.479238 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-utilities\") pod \"community-operators-w2glp\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.479492 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:41.979479861 +0000 UTC m=+152.089108729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.484396 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" event={"ID":"7952e5c1-f45f-492b-9f7c-b92b2d079994","Type":"ContainerStarted","Data":"46c46e263e7ba17ab4399ec2444a82fbae8038b263d95df52822dc6f3010008a"} Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.485502 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-catalog-content\") pod \"community-operators-w2glp\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.524089 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2ph\" (UniqueName: \"kubernetes.io/projected/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-kube-api-access-pn2ph\") pod \"community-operators-w2glp\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.555942 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lcrsk"] Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.556912 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.563430 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcrsk"] Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.563883 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.596285 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.596649 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-utilities\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.596720 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rchkb\" (UniqueName: \"kubernetes.io/projected/52724116-f0b6-48c0-9de2-a6dc6ba73524-kube-api-access-rchkb\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.596758 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-catalog-content\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.597149 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-catalog-content\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.597217 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:42.097202847 +0000 UTC m=+152.206831715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.598731 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-utilities\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.626948 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rchkb\" (UniqueName: \"kubernetes.io/projected/52724116-f0b6-48c0-9de2-a6dc6ba73524-kube-api-access-rchkb\") pod \"certified-operators-6g9dv\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.701120 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr65f\" (UniqueName: \"kubernetes.io/projected/5e868b43-61d8-4178-9d1a-74ef463cc241-kube-api-access-lr65f\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.701174 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-utilities\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.701284 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.701341 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-catalog-content\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.711752 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:42.211726349 +0000 UTC m=+152.321355217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.720318 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.721250 4688 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.753729 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kmsq"] Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.754324 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3971fa_9838_436e_97b1_be050abea83a.slice/crio-conmon-b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.755003 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.762420 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kmsq"] Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.804123 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.804538 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:42.304502903 +0000 UTC m=+152.414131771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.804945 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.805001 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-catalog-content\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.805033 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrhj6\" (UniqueName: \"kubernetes.io/projected/07268d96-608c-48f2-8ffc-d30182070c75-kube-api-access-vrhj6\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.805073 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-catalog-content\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.805124 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr65f\" (UniqueName: \"kubernetes.io/projected/5e868b43-61d8-4178-9d1a-74ef463cc241-kube-api-access-lr65f\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.805144 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-utilities\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.805178 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-utilities\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.805749 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-utilities\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.806054 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 12:16:42.306044463 +0000 UTC m=+152.415673331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fnx92" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.806689 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-catalog-content\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.844517 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr65f\" (UniqueName: \"kubernetes.io/projected/5e868b43-61d8-4178-9d1a-74ef463cc241-kube-api-access-lr65f\") pod \"community-operators-lcrsk\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.905582 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.905897 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-catalog-content\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.905922 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrhj6\" (UniqueName: \"kubernetes.io/projected/07268d96-608c-48f2-8ffc-d30182070c75-kube-api-access-vrhj6\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.905969 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-utilities\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: E1125 12:16:41.906037 4688 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 12:16:42.406012155 +0000 UTC m=+152.515641023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.906344 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-utilities\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.906721 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-catalog-content\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.928425 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrhj6\" (UniqueName: \"kubernetes.io/projected/07268d96-608c-48f2-8ffc-d30182070c75-kube-api-access-vrhj6\") pod \"certified-operators-8kmsq\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.961670 4688 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T12:16:41.721268989Z","Handler":null,"Name":""} Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.972852 4688 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.972887 4688 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 12:16:41 crc kubenswrapper[4688]: I1125 12:16:41.990014 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2glp"] Nov 25 12:16:42 crc kubenswrapper[4688]: W1125 12:16:42.001146 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc57b3e0f_17f5_42a4_bc38_40f6d101aecf.slice/crio-4640c7f67923c4b9a24899f3de570f30dca70e4a86c41423e7edde49639ab0e2 WatchSource:0}: Error finding container 4640c7f67923c4b9a24899f3de570f30dca70e4a86c41423e7edde49639ab0e2: Status 404 returned error can't find the container with id 4640c7f67923c4b9a24899f3de570f30dca70e4a86c41423e7edde49639ab0e2 Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.008952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.016022 4688 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.016060 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.026968 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.050994 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fnx92\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") " pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.112749 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.122020 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.122018 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.227713 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6g9dv"] Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.246604 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcrsk"] Nov 25 12:16:42 crc kubenswrapper[4688]: W1125 12:16:42.252961 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52724116_f0b6_48c0_9de2_a6dc6ba73524.slice/crio-9f3b88444215791232340b269e451dfe659fa69efcacdcd83da22ec9d9d5dce4 WatchSource:0}: Error finding container 9f3b88444215791232340b269e451dfe659fa69efcacdcd83da22ec9d9d5dce4: Status 404 returned error can't find the container with id 9f3b88444215791232340b269e451dfe659fa69efcacdcd83da22ec9d9d5dce4 Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.272053 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 12:16:42 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Nov 25 12:16:42 crc kubenswrapper[4688]: [+]process-running ok Nov 25 12:16:42 crc kubenswrapper[4688]: healthz check failed Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.272107 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.336458 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.367104 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kmsq"] Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.512814 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" event={"ID":"7952e5c1-f45f-492b-9f7c-b92b2d079994","Type":"ContainerStarted","Data":"5839f0889f436d9093d2f76bcae36d8a7afe5d6ac01a38cf35ad40734f8d5256"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.512864 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" event={"ID":"7952e5c1-f45f-492b-9f7c-b92b2d079994","Type":"ContainerStarted","Data":"d789684c5a1d52b1e8852aeb2144840f6cb59e77901a55fae08ab7b5cbcb1e37"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.517880 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ec333bdf55ba3edcaf7f106642a5baffbcc4119da4f03f887680bba7e2b7aff7"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.517933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c1c38808e6e494ede58503f316e6b1c77abbb3bcf19402ea0dc48c9cfbac378e"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.518135 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.519467 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmsq" event={"ID":"07268d96-608c-48f2-8ffc-d30182070c75","Type":"ContainerStarted","Data":"3f8983a47166b6b821e425917cf67205fb73d6019c1812de15a2884082b7eab9"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.523417 4688 generic.go:334] "Generic (PLEG): container finished" podID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerID="eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc" exitCode=0 Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.523502 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcrsk" event={"ID":"5e868b43-61d8-4178-9d1a-74ef463cc241","Type":"ContainerDied","Data":"eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.523544 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcrsk" event={"ID":"5e868b43-61d8-4178-9d1a-74ef463cc241","Type":"ContainerStarted","Data":"52562d12b82805ab0a4e6e4b36f46f743b772ba494dc7bac997be4479dec5616"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.528799 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f055556ae0da69a45ad3087c1dd06b05f84731e40c19f70e83b3699acc4e0cb4"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.528858 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3fce9a31c4e0bd1669edfb828a53689b5db67dabd3b4725bfc5cb31a0fe3f8f1"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.529234 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.531662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b6b7d7d8f60df240831a22a2a91b8b8d3fd2702035589225ba2101917634b961"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.531709 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b3d58a3f8a8c4e9100faa40ff75f777ffd04540826ad3ad9a8feded0d61aca4c"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.534645 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gtmpg" podStartSLOduration=11.534627427 podStartE2EDuration="11.534627427s" podCreationTimestamp="2025-11-25 12:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:42.532491222 +0000 UTC m=+152.642120090" watchObservedRunningTime="2025-11-25 12:16:42.534627427 +0000 UTC m=+152.644256295" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.543311 4688 generic.go:334] "Generic (PLEG): container finished" podID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerID="be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1" exitCode=0 Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.543398 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2glp" event={"ID":"c57b3e0f-17f5-42a4-bc38-40f6d101aecf","Type":"ContainerDied","Data":"be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.543426 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2glp" event={"ID":"c57b3e0f-17f5-42a4-bc38-40f6d101aecf","Type":"ContainerStarted","Data":"4640c7f67923c4b9a24899f3de570f30dca70e4a86c41423e7edde49639ab0e2"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.547189 4688 generic.go:334] "Generic (PLEG): container finished" podID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerID="25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46" exitCode=0 Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.547228 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g9dv" event={"ID":"52724116-f0b6-48c0-9de2-a6dc6ba73524","Type":"ContainerDied","Data":"25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.547252 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g9dv" event={"ID":"52724116-f0b6-48c0-9de2-a6dc6ba73524","Type":"ContainerStarted","Data":"9f3b88444215791232340b269e451dfe659fa69efcacdcd83da22ec9d9d5dce4"} Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.566974 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-fnx92"] Nov 25 12:16:42 crc kubenswrapper[4688]: W1125 12:16:42.580565 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod287e5654_ecac_4340_ad1f_9a307d57de32.slice/crio-c2803f1393d71d93857840ca8887db40d98d63d28f981c6d0599aaf724343729 WatchSource:0}: Error finding container c2803f1393d71d93857840ca8887db40d98d63d28f981c6d0599aaf724343729: Status 404 returned error can't find the container with id c2803f1393d71d93857840ca8887db40d98d63d28f981c6d0599aaf724343729 Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.619375 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c6wp5" Nov 25 12:16:42 crc kubenswrapper[4688]: I1125 12:16:42.748940 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.143766 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbcb"] Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.145141 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.146951 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.159727 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbcb"] Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.231693 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzk4z\" (UniqueName: \"kubernetes.io/projected/d17b935e-550e-4a26-8974-0d8c70f0657f-kube-api-access-pzk4z\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.231949 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-catalog-content\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.232018 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-utilities\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.262150 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 12:16:43 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Nov 25 12:16:43 crc kubenswrapper[4688]: [+]process-running ok Nov 25 12:16:43 crc kubenswrapper[4688]: healthz check failed Nov 
25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.262237 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.333004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzk4z\" (UniqueName: \"kubernetes.io/projected/d17b935e-550e-4a26-8974-0d8c70f0657f-kube-api-access-pzk4z\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.333142 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-catalog-content\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.333176 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-utilities\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.333677 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-catalog-content\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.334851 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-utilities\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.351857 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzk4z\" (UniqueName: \"kubernetes.io/projected/d17b935e-550e-4a26-8974-0d8c70f0657f-kube-api-access-pzk4z\") pod \"redhat-marketplace-6gbcb\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.389267 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.397414 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bndd9" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.425733 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.432059 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rpxm7" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.470384 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.520012 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-d9t24 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.520055 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-d9t24 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.520073 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-d9t24" podUID="8c1c5541-855e-4672-9bfe-080fdd2a42f1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.520121 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d9t24" podUID="8c1c5541-855e-4672-9bfe-080fdd2a42f1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.568779 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-62cmt"] Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.570049 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.576384 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" event={"ID":"287e5654-ecac-4340-ad1f-9a307d57de32","Type":"ContainerStarted","Data":"7d03b2f5fb3b39d595b9f4ad97a41c57d2c4b5765d2765c31c9cc3bf7402414d"} Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.576446 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" event={"ID":"287e5654-ecac-4340-ad1f-9a307d57de32","Type":"ContainerStarted","Data":"c2803f1393d71d93857840ca8887db40d98d63d28f981c6d0599aaf724343729"} Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.577533 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.592213 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62cmt"] Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.593353 4688 generic.go:334] "Generic (PLEG): container finished" podID="07268d96-608c-48f2-8ffc-d30182070c75" containerID="41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16" exitCode=0 Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.593855 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmsq" event={"ID":"07268d96-608c-48f2-8ffc-d30182070c75","Type":"ContainerDied","Data":"41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16"} Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.596720 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.596971 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.600298 4688 patch_prober.go:28] interesting pod/console-f9d7485db-xf6xd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.600329 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xf6xd" podUID="4888de7e-b0ae-4682-a404-545a9ba9cd82" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.614970 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-kch26" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.659542 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.722483 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" podStartSLOduration=126.72246503 podStartE2EDuration="2m6.72246503s" podCreationTimestamp="2025-11-25 12:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:16:43.718843435 +0000 UTC m=+153.828472313" watchObservedRunningTime="2025-11-25 12:16:43.72246503 +0000 UTC m=+153.832093898" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.742166 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-utilities\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.742504 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-catalog-content\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.742630 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4rps\" (UniqueName: \"kubernetes.io/projected/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-kube-api-access-l4rps\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.847102 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-utilities\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " 
pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.847171 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-catalog-content\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.847199 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4rps\" (UniqueName: \"kubernetes.io/projected/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-kube-api-access-l4rps\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.847962 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-utilities\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.848161 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-catalog-content\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.881464 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4rps\" (UniqueName: \"kubernetes.io/projected/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-kube-api-access-l4rps\") pod \"redhat-marketplace-62cmt\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") " pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.911311 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbcb"] Nov 25 12:16:43 crc kubenswrapper[4688]: I1125 12:16:43.946279 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:16:44 crc kubenswrapper[4688]: W1125 12:16:44.000696 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd17b935e_550e_4a26_8974_0d8c70f0657f.slice/crio-20a5db94837f5f3de9f5858843571885e743872bfdfb840e96681d5e1119a917 WatchSource:0}: Error finding container 20a5db94837f5f3de9f5858843571885e743872bfdfb840e96681d5e1119a917: Status 404 returned error can't find the container with id 20a5db94837f5f3de9f5858843571885e743872bfdfb840e96681d5e1119a917 Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.152668 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.154313 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.160138 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.160410 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.164955 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.258176 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.258742 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13f2c3c-3deb-4354-a34b-6aa446583697-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d13f2c3c-3deb-4354-a34b-6aa446583697\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.259086 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13f2c3c-3deb-4354-a34b-6aa446583697-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d13f2c3c-3deb-4354-a34b-6aa446583697\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.263790 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 12:16:44 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Nov 25 12:16:44 crc kubenswrapper[4688]: [+]process-running ok Nov 25 12:16:44 crc kubenswrapper[4688]: healthz check failed Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.263872 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.300278 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62cmt"] Nov 25 12:16:44 crc kubenswrapper[4688]: W1125 12:16:44.319224 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2fdd23a_1f19_466c_bcc4_c75a69ed63f0.slice/crio-d455e2711722786c1ebf32a25e24cbdd3d3cf8c3763ef1eb4e0e241fd5958554 WatchSource:0}: Error finding container d455e2711722786c1ebf32a25e24cbdd3d3cf8c3763ef1eb4e0e241fd5958554: Status 404 returned error can't find the container with id d455e2711722786c1ebf32a25e24cbdd3d3cf8c3763ef1eb4e0e241fd5958554 Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.344196 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ps6n8"] Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.345348 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.347643 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.356119 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ps6n8"] Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.364110 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13f2c3c-3deb-4354-a34b-6aa446583697-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d13f2c3c-3deb-4354-a34b-6aa446583697\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.364202 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13f2c3c-3deb-4354-a34b-6aa446583697-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d13f2c3c-3deb-4354-a34b-6aa446583697\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.364456 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13f2c3c-3deb-4354-a34b-6aa446583697-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d13f2c3c-3deb-4354-a34b-6aa446583697\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.394595 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13f2c3c-3deb-4354-a34b-6aa446583697-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d13f2c3c-3deb-4354-a34b-6aa446583697\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.465551 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82csw\" (UniqueName: \"kubernetes.io/projected/ed58cdcf-2778-4c9f-8ef5-915035ad0800-kube-api-access-82csw\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.465623 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-utilities\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.465671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-catalog-content\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.484794 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.495367 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.496138 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.502205 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.502211 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.506061 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.566512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-utilities\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.566596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-catalog-content\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.566860 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82csw\" (UniqueName: \"kubernetes.io/projected/ed58cdcf-2778-4c9f-8ef5-915035ad0800-kube-api-access-82csw\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.566898 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-utilities\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.567277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-catalog-content\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.586584 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82csw\" (UniqueName: \"kubernetes.io/projected/ed58cdcf-2778-4c9f-8ef5-915035ad0800-kube-api-access-82csw\") pod \"redhat-operators-ps6n8\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.605365 4688 generic.go:334] "Generic (PLEG): container finished" podID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerID="fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad" 
exitCode=0 Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.605375 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbcb" event={"ID":"d17b935e-550e-4a26-8974-0d8c70f0657f","Type":"ContainerDied","Data":"fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad"} Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.605551 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbcb" event={"ID":"d17b935e-550e-4a26-8974-0d8c70f0657f","Type":"ContainerStarted","Data":"20a5db94837f5f3de9f5858843571885e743872bfdfb840e96681d5e1119a917"} Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.607415 4688 generic.go:334] "Generic (PLEG): container finished" podID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerID="1d8542f2d6345f930ae16de028f69a3649d4b2eae879154d1c52365d8b2177c5" exitCode=0 Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.607492 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62cmt" event={"ID":"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0","Type":"ContainerDied","Data":"1d8542f2d6345f930ae16de028f69a3649d4b2eae879154d1c52365d8b2177c5"} Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.607551 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62cmt" event={"ID":"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0","Type":"ContainerStarted","Data":"d455e2711722786c1ebf32a25e24cbdd3d3cf8c3763ef1eb4e0e241fd5958554"} Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.667386 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.667702 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.667910 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.675314 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.752501 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dvn5d"] Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.753596 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.764240 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvn5d"] Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.774553 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.774772 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.775014 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.792957 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.872069 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.877946 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xdt\" (UniqueName: \"kubernetes.io/projected/b022517b-f6e5-412c-b408-905938e25bbd-kube-api-access-j7xdt\") pod \"redhat-operators-dvn5d\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.877992 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-catalog-content\") pod \"redhat-operators-dvn5d\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.878041 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-utilities\") pod \"redhat-operators-dvn5d\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.978920 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7xdt\" (UniqueName: \"kubernetes.io/projected/b022517b-f6e5-412c-b408-905938e25bbd-kube-api-access-j7xdt\") pod \"redhat-operators-dvn5d\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.979278 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-catalog-content\") pod \"redhat-operators-dvn5d\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.979323 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-utilities\") pod \"redhat-operators-dvn5d\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.980040 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-catalog-content\") pod \"redhat-operators-dvn5d\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.981165 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-utilities\") pod \"redhat-operators-dvn5d\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:44 crc kubenswrapper[4688]: I1125 12:16:44.995650 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7xdt\" (UniqueName: \"kubernetes.io/projected/b022517b-f6e5-412c-b408-905938e25bbd-kube-api-access-j7xdt\") pod \"redhat-operators-dvn5d\" (UID: 
\"b022517b-f6e5-412c-b408-905938e25bbd\") " pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.033606 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ps6n8"] Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.078673 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.234313 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.239362 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 12:16:45 crc kubenswrapper[4688]: W1125 12:16:45.243813 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb4a5cfef_89fa_47bd_9f07_95ab7ced310f.slice/crio-88198c18122ba4b73d9c310983ff774189e12ff8bf7e287fbe2a307339acecbf WatchSource:0}: Error finding container 88198c18122ba4b73d9c310983ff774189e12ff8bf7e287fbe2a307339acecbf: Status 404 returned error can't find the container with id 88198c18122ba4b73d9c310983ff774189e12ff8bf7e287fbe2a307339acecbf Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.276377 4688 patch_prober.go:28] interesting pod/router-default-5444994796-hzmsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 12:16:45 crc kubenswrapper[4688]: [+]has-synced ok Nov 25 12:16:45 crc kubenswrapper[4688]: [+]process-running ok Nov 25 12:16:45 crc kubenswrapper[4688]: healthz check failed Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.276508 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hzmsk" podUID="943de0dc-b19a-4411-afc4-9e7a82a771bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.679889 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d13f2c3c-3deb-4354-a34b-6aa446583697","Type":"ContainerStarted","Data":"1461aa9d76390c2b810b2c698529b073b82e064ec001206c788ffa964cbdd09b"} Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.685902 4688 generic.go:334] "Generic (PLEG): container finished" podID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerID="92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8" exitCode=0 Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.685989 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps6n8" event={"ID":"ed58cdcf-2778-4c9f-8ef5-915035ad0800","Type":"ContainerDied","Data":"92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8"} Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.686013 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps6n8" event={"ID":"ed58cdcf-2778-4c9f-8ef5-915035ad0800","Type":"ContainerStarted","Data":"9319e4e399670e9123968b1306dd5b446017f0849c02bf7dd102a5cbbb938d65"} Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.688744 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"b4a5cfef-89fa-47bd-9f07-95ab7ced310f","Type":"ContainerStarted","Data":"88198c18122ba4b73d9c310983ff774189e12ff8bf7e287fbe2a307339acecbf"} Nov 25 12:16:45 crc kubenswrapper[4688]: I1125 12:16:45.731973 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvn5d"] Nov 25 12:16:45 crc kubenswrapper[4688]: W1125 12:16:45.755032 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb022517b_f6e5_412c_b408_905938e25bbd.slice/crio-d6137753ea5186a0a1a417a91dc7c60534865894472817ba38ab5b6f042ba47c WatchSource:0}: Error finding container d6137753ea5186a0a1a417a91dc7c60534865894472817ba38ab5b6f042ba47c: Status 404 returned error can't find the container with id d6137753ea5186a0a1a417a91dc7c60534865894472817ba38ab5b6f042ba47c Nov 25 12:16:46 crc kubenswrapper[4688]: I1125 12:16:46.267334 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:46 crc kubenswrapper[4688]: I1125 12:16:46.270517 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hzmsk" Nov 25 12:16:46 crc kubenswrapper[4688]: I1125 12:16:46.746290 4688 generic.go:334] "Generic (PLEG): container finished" podID="694823ec-105f-4183-9dfb-8fa7f414c8ac" containerID="a1057226e36ba81ad338e64d5cb85c59996230ee07efc7d6ac6df1f36c24fc46" exitCode=0 Nov 25 12:16:46 crc kubenswrapper[4688]: I1125 12:16:46.764617 4688 generic.go:334] "Generic (PLEG): container finished" podID="b022517b-f6e5-412c-b408-905938e25bbd" containerID="c027355f36a5a8ac3a7c0a97d579ec6f2b3a00cfb55bdc48edfd57962250ba99" exitCode=0 Nov 25 12:16:46 crc kubenswrapper[4688]: I1125 12:16:46.769296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" event={"ID":"694823ec-105f-4183-9dfb-8fa7f414c8ac","Type":"ContainerDied","Data":"a1057226e36ba81ad338e64d5cb85c59996230ee07efc7d6ac6df1f36c24fc46"} Nov 25 12:16:46 crc kubenswrapper[4688]: I1125 12:16:46.769337 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvn5d" event={"ID":"b022517b-f6e5-412c-b408-905938e25bbd","Type":"ContainerDied","Data":"c027355f36a5a8ac3a7c0a97d579ec6f2b3a00cfb55bdc48edfd57962250ba99"} Nov 25 12:16:46 crc kubenswrapper[4688]: I1125 12:16:46.769348 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvn5d" event={"ID":"b022517b-f6e5-412c-b408-905938e25bbd","Type":"ContainerStarted","Data":"d6137753ea5186a0a1a417a91dc7c60534865894472817ba38ab5b6f042ba47c"} Nov 25 12:16:47 crc kubenswrapper[4688]: I1125 12:16:47.039323 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-f9wfz" Nov 25 12:16:47 crc kubenswrapper[4688]: I1125 12:16:47.787781 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d13f2c3c-3deb-4354-a34b-6aa446583697","Type":"ContainerStarted","Data":"ffeb80d15db409c866bbdcb6760597aea509df5c0274f758839e65a51607467c"} Nov 25 12:16:47 crc kubenswrapper[4688]: I1125 12:16:47.790724 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"b4a5cfef-89fa-47bd-9f07-95ab7ced310f","Type":"ContainerStarted","Data":"cb0b26c3395736833df867cf0ec15e84f87321daf1e0cc59ed6c8c4ddd8d2b97"} Nov 25 12:16:47 crc kubenswrapper[4688]: I1125 12:16:47.853870 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:16:47 crc kubenswrapper[4688]: I1125 12:16:47.854197 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.202217 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.364222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lzlt\" (UniqueName: \"kubernetes.io/projected/694823ec-105f-4183-9dfb-8fa7f414c8ac-kube-api-access-4lzlt\") pod \"694823ec-105f-4183-9dfb-8fa7f414c8ac\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.364274 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/694823ec-105f-4183-9dfb-8fa7f414c8ac-secret-volume\") pod \"694823ec-105f-4183-9dfb-8fa7f414c8ac\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.364386 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/694823ec-105f-4183-9dfb-8fa7f414c8ac-config-volume\") pod \"694823ec-105f-4183-9dfb-8fa7f414c8ac\" (UID: \"694823ec-105f-4183-9dfb-8fa7f414c8ac\") " Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.365046 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694823ec-105f-4183-9dfb-8fa7f414c8ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "694823ec-105f-4183-9dfb-8fa7f414c8ac" (UID: "694823ec-105f-4183-9dfb-8fa7f414c8ac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.370322 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694823ec-105f-4183-9dfb-8fa7f414c8ac-kube-api-access-4lzlt" (OuterVolumeSpecName: "kube-api-access-4lzlt") pod "694823ec-105f-4183-9dfb-8fa7f414c8ac" (UID: "694823ec-105f-4183-9dfb-8fa7f414c8ac"). InnerVolumeSpecName "kube-api-access-4lzlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.377483 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694823ec-105f-4183-9dfb-8fa7f414c8ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "694823ec-105f-4183-9dfb-8fa7f414c8ac" (UID: "694823ec-105f-4183-9dfb-8fa7f414c8ac"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.466318 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/694823ec-105f-4183-9dfb-8fa7f414c8ac-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.466767 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/694823ec-105f-4183-9dfb-8fa7f414c8ac-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.466819 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lzlt\" (UniqueName: \"kubernetes.io/projected/694823ec-105f-4183-9dfb-8fa7f414c8ac-kube-api-access-4lzlt\") on node \"crc\" DevicePath \"\"" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.832288 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" event={"ID":"694823ec-105f-4183-9dfb-8fa7f414c8ac","Type":"ContainerDied","Data":"e37d7e4f73e6802b1ebed4e9cae0e628afbf6cf3b9eeacdce437b1f3ae34d86d"} Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.832334 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e37d7e4f73e6802b1ebed4e9cae0e628afbf6cf3b9eeacdce437b1f3ae34d86d" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.832448 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6" Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.838413 4688 generic.go:334] "Generic (PLEG): container finished" podID="b4a5cfef-89fa-47bd-9f07-95ab7ced310f" containerID="cb0b26c3395736833df867cf0ec15e84f87321daf1e0cc59ed6c8c4ddd8d2b97" exitCode=0 Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.838496 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b4a5cfef-89fa-47bd-9f07-95ab7ced310f","Type":"ContainerDied","Data":"cb0b26c3395736833df867cf0ec15e84f87321daf1e0cc59ed6c8c4ddd8d2b97"} Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.848886 4688 generic.go:334] "Generic (PLEG): container finished" podID="d13f2c3c-3deb-4354-a34b-6aa446583697" containerID="ffeb80d15db409c866bbdcb6760597aea509df5c0274f758839e65a51607467c" exitCode=0 Nov 25 12:16:48 crc kubenswrapper[4688]: I1125 12:16:48.848936 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d13f2c3c-3deb-4354-a34b-6aa446583697","Type":"ContainerDied","Data":"ffeb80d15db409c866bbdcb6760597aea509df5c0274f758839e65a51607467c"} Nov 25 12:16:51 crc kubenswrapper[4688]: E1125 12:16:51.900682 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3971fa_9838_436e_97b1_be050abea83a.slice/crio-conmon-b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:16:53 crc kubenswrapper[4688]: I1125 12:16:53.519532 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-d9t24 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= 
Nov 25 12:16:53 crc kubenswrapper[4688]: I1125 12:16:53.519902 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-d9t24" podUID="8c1c5541-855e-4672-9bfe-080fdd2a42f1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 25 12:16:53 crc kubenswrapper[4688]: I1125 12:16:53.519565 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-d9t24 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Nov 25 12:16:53 crc kubenswrapper[4688]: I1125 12:16:53.520025 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d9t24" podUID="8c1c5541-855e-4672-9bfe-080fdd2a42f1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 25 12:16:53 crc kubenswrapper[4688]: I1125 12:16:53.601217 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xf6xd"
Nov 25 12:16:53 crc kubenswrapper[4688]: I1125 12:16:53.605260 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xf6xd"
Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.320048 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.483366 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13f2c3c-3deb-4354-a34b-6aa446583697-kubelet-dir\") pod \"d13f2c3c-3deb-4354-a34b-6aa446583697\" (UID: \"d13f2c3c-3deb-4354-a34b-6aa446583697\") "
Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.483438 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13f2c3c-3deb-4354-a34b-6aa446583697-kube-api-access\") pod \"d13f2c3c-3deb-4354-a34b-6aa446583697\" (UID: \"d13f2c3c-3deb-4354-a34b-6aa446583697\") "
Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.483438 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d13f2c3c-3deb-4354-a34b-6aa446583697-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d13f2c3c-3deb-4354-a34b-6aa446583697" (UID: "d13f2c3c-3deb-4354-a34b-6aa446583697"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.483727 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d13f2c3c-3deb-4354-a34b-6aa446583697-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.490199 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13f2c3c-3deb-4354-a34b-6aa446583697-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d13f2c3c-3deb-4354-a34b-6aa446583697" (UID: "d13f2c3c-3deb-4354-a34b-6aa446583697"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.584903 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d13f2c3c-3deb-4354-a34b-6aa446583697-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.830791 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.940778 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.940778 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b4a5cfef-89fa-47bd-9f07-95ab7ced310f","Type":"ContainerDied","Data":"88198c18122ba4b73d9c310983ff774189e12ff8bf7e287fbe2a307339acecbf"} Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.941105 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88198c18122ba4b73d9c310983ff774189e12ff8bf7e287fbe2a307339acecbf" Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.947074 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d13f2c3c-3deb-4354-a34b-6aa446583697","Type":"ContainerDied","Data":"1461aa9d76390c2b810b2c698529b073b82e064ec001206c788ffa964cbdd09b"} Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.947107 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.947110 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1461aa9d76390c2b810b2c698529b073b82e064ec001206c788ffa964cbdd09b" Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.990955 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kube-api-access\") pod \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\" (UID: \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\") " Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.991060 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kubelet-dir\") pod \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\" (UID: \"b4a5cfef-89fa-47bd-9f07-95ab7ced310f\") " Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.991192 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b4a5cfef-89fa-47bd-9f07-95ab7ced310f" (UID: "b4a5cfef-89fa-47bd-9f07-95ab7ced310f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:16:56 crc kubenswrapper[4688]: I1125 12:16:56.991336 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:16:59 crc kubenswrapper[4688]: I1125 12:16:59.421076 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:59 crc kubenswrapper[4688]: I1125 12:16:59.424393 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45273ea2-4a52-4191-a40a-4b4d3b1a12dd-metrics-certs\") pod \"network-metrics-daemon-xbqw8\" (UID: \"45273ea2-4a52-4191-a40a-4b4d3b1a12dd\") " pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:16:59 crc kubenswrapper[4688]: I1125 12:16:59.480790 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbqw8" Nov 25 12:17:02 crc kubenswrapper[4688]: E1125 12:17:02.054365 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3971fa_9838_436e_97b1_be050abea83a.slice/crio-conmon-b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:17:02 crc kubenswrapper[4688]: I1125 12:17:02.343874 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" Nov 25 12:17:03 crc kubenswrapper[4688]: I1125 12:17:03.528010 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-d9t24" Nov 25 12:17:11 crc kubenswrapper[4688]: I1125 12:17:11.061931 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 12:17:13 crc kubenswrapper[4688]: I1125 12:17:13.914940 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b4a5cfef-89fa-47bd-9f07-95ab7ced310f" (UID: "b4a5cfef-89fa-47bd-9f07-95ab7ced310f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:17:13 crc kubenswrapper[4688]: I1125 12:17:13.981453 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4a5cfef-89fa-47bd-9f07-95ab7ced310f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:14 crc kubenswrapper[4688]: E1125 12:17:14.143369 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 12:17:14 crc kubenswrapper[4688]: E1125 12:17:14.143627 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rchkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6g9dv_openshift-marketplace(52724116-f0b6-48c0-9de2-a6dc6ba73524): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 12:17:14 crc kubenswrapper[4688]: E1125 12:17:14.144786 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6g9dv" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" Nov 25 12:17:15 crc kubenswrapper[4688]: I1125 12:17:15.253816 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-98k7h" Nov 25 12:17:17 crc kubenswrapper[4688]: I1125 12:17:17.854499 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:17:17 crc 
kubenswrapper[4688]: I1125 12:17:17.854996 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:17:19 crc kubenswrapper[4688]: E1125 12:17:19.817340 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 12:17:19 crc kubenswrapper[4688]: E1125 12:17:19.817681 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82csw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ps6n8_openshift-marketplace(ed58cdcf-2778-4c9f-8ef5-915035ad0800): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 12:17:19 crc kubenswrapper[4688]: E1125 12:17:19.819326 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ps6n8" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" Nov 25 12:17:21 crc kubenswrapper[4688]: E1125 12:17:21.396090 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 12:17:21 crc kubenswrapper[4688]: E1125 12:17:21.396330 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7xdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dvn5d_openshift-marketplace(b022517b-f6e5-412c-b408-905938e25bbd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 12:17:21 crc kubenswrapper[4688]: E1125 12:17:21.397770 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dvn5d" podUID="b022517b-f6e5-412c-b408-905938e25bbd" Nov 25 12:17:22 crc kubenswrapper[4688]: E1125 12:17:22.064917 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ps6n8" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" Nov 25 12:17:22 crc kubenswrapper[4688]: E1125 12:17:22.065054 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6g9dv" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" Nov 25 12:17:22 crc kubenswrapper[4688]: E1125 12:17:22.144130 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 12:17:22 crc kubenswrapper[4688]: E1125 12:17:22.144316 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4rps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-62cmt_openshift-marketplace(b2fdd23a-1f19-466c-bcc4-c75a69ed63f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 12:17:22 crc kubenswrapper[4688]: E1125 12:17:22.145667 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-62cmt" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.135596 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-62cmt" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.318782 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.319609 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lr65f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lcrsk_openshift-marketplace(5e868b43-61d8-4178-9d1a-74ef463cc241): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.320893 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lcrsk" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.327100 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.327267 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pn2ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w2glp_openshift-marketplace(c57b3e0f-17f5-42a4-bc38-40f6d101aecf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.329206 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w2glp" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.375068 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.375228 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrhj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8kmsq_openshift-marketplace(07268d96-608c-48f2-8ffc-d30182070c75): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.376802 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8kmsq" podUID="07268d96-608c-48f2-8ffc-d30182070c75" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.409720 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.409864 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzk4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6gbcb_openshift-marketplace(d17b935e-550e-4a26-8974-0d8c70f0657f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 12:17:24 crc kubenswrapper[4688]: E1125 12:17:24.411065 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6gbcb" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" Nov 25 12:17:24 crc kubenswrapper[4688]: I1125 12:17:24.588952 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xbqw8"] Nov 25 12:17:24 crc kubenswrapper[4688]: W1125 12:17:24.597791 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45273ea2_4a52_4191_a40a_4b4d3b1a12dd.slice/crio-b2c6128ce3d7163dcb0e34ce7b7557ac732a441a52f47fbc78f0717125fa9a0d WatchSource:0}: Error finding container b2c6128ce3d7163dcb0e34ce7b7557ac732a441a52f47fbc78f0717125fa9a0d: Status 404 returned error can't find the container with id b2c6128ce3d7163dcb0e34ce7b7557ac732a441a52f47fbc78f0717125fa9a0d Nov 25 12:17:25 crc kubenswrapper[4688]: I1125 12:17:25.120209 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" event={"ID":"45273ea2-4a52-4191-a40a-4b4d3b1a12dd","Type":"ContainerStarted","Data":"16970d4422c5bb1ed518422d237c5a311e68dcd04da3fa0bf982559764615e74"} Nov 25 12:17:25 crc kubenswrapper[4688]: I1125 12:17:25.120271 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" event={"ID":"45273ea2-4a52-4191-a40a-4b4d3b1a12dd","Type":"ContainerStarted","Data":"b2c6128ce3d7163dcb0e34ce7b7557ac732a441a52f47fbc78f0717125fa9a0d"} Nov 25 12:17:25 crc kubenswrapper[4688]: E1125 12:17:25.122266 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
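Each ErrImagePull above is followed shortly by ImagePullBackOff: after a failed pull the kubelet waits before retrying, doubling the delay for each consecutive failure up to a cap. A sketch of that schedule follows; the 10s initial delay and 5m cap are commonly cited kubelet defaults and are an assumption here, not values read from this cluster's configuration.

// backoff.go - a sketch of the exponential back-off that produces the
// ImagePullBackOff lines above: each consecutive pull failure doubles the
// wait before the next attempt, up to a cap. The 10s/5m parameters are
// assumed defaults, not taken from this node's config.
package main

import (
	"fmt"
	"time"
)

func pullDelay(failures int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> next pull in %v\n", n, pullDelay(n, 10*time.Second, 5*time.Minute))
	}
	// failure 1 -> 10s, 2 -> 20s, 3 -> 40s, ..., 6 and later -> 5m0s (capped)
}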
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8kmsq" podUID="07268d96-608c-48f2-8ffc-d30182070c75" Nov 25 12:17:25 crc kubenswrapper[4688]: E1125 12:17:25.122306 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-w2glp" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" Nov 25 12:17:25 crc kubenswrapper[4688]: E1125 12:17:25.123393 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lcrsk" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" Nov 25 12:17:25 crc kubenswrapper[4688]: E1125 12:17:25.124198 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6gbcb" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" Nov 25 12:17:26 crc kubenswrapper[4688]: I1125 12:17:26.128661 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbqw8" event={"ID":"45273ea2-4a52-4191-a40a-4b4d3b1a12dd","Type":"ContainerStarted","Data":"9bc6d6868c0360aacbaa22508c18dfff7f16fda37b94e78d5d2ff3cf539874f3"} Nov 25 12:17:26 crc kubenswrapper[4688]: I1125 12:17:26.154672 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xbqw8" podStartSLOduration=170.154642644 podStartE2EDuration="2m50.154642644s" podCreationTimestamp="2025-11-25 12:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:17:26.149952198 +0000 UTC m=+196.259581086" watchObservedRunningTime="2025-11-25 12:17:26.154642644 +0000 UTC m=+196.264271552" Nov 25 12:17:33 crc kubenswrapper[4688]: I1125 12:17:33.172900 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvn5d" event={"ID":"b022517b-f6e5-412c-b408-905938e25bbd","Type":"ContainerStarted","Data":"ddb825e571ebaef762592f2049452c59e643cb215801c25bb4c1dffbb858f6e0"} Nov 25 12:17:34 crc kubenswrapper[4688]: I1125 12:17:34.178402 4688 generic.go:334] "Generic (PLEG): container finished" podID="b022517b-f6e5-412c-b408-905938e25bbd" containerID="ddb825e571ebaef762592f2049452c59e643cb215801c25bb4c1dffbb858f6e0" exitCode=0 Nov 25 12:17:34 crc kubenswrapper[4688]: I1125 12:17:34.178766 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvn5d" event={"ID":"b022517b-f6e5-412c-b408-905938e25bbd","Type":"ContainerDied","Data":"ddb825e571ebaef762592f2049452c59e643cb215801c25bb4c1dffbb858f6e0"} Nov 25 12:17:35 crc kubenswrapper[4688]: I1125 12:17:35.185737 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvn5d" event={"ID":"b022517b-f6e5-412c-b408-905938e25bbd","Type":"ContainerStarted","Data":"28ad4b8da71fb36b5e017292e2f3279e2126a9dc3bcb2e711bc4b18507feb36a"} Nov 25 12:17:35 crc kubenswrapper[4688]: I1125 12:17:35.203629 4688 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dvn5d" podStartSLOduration=4.397306491 podStartE2EDuration="51.203610835s" podCreationTimestamp="2025-11-25 12:16:44 +0000 UTC" firstStartedPulling="2025-11-25 12:16:47.794872122 +0000 UTC m=+157.904500990" lastFinishedPulling="2025-11-25 12:17:34.601176476 +0000 UTC m=+204.710805334" observedRunningTime="2025-11-25 12:17:35.20190823 +0000 UTC m=+205.311537108" watchObservedRunningTime="2025-11-25 12:17:35.203610835 +0000 UTC m=+205.313239703" Nov 25 12:17:36 crc kubenswrapper[4688]: I1125 12:17:36.194665 4688 generic.go:334] "Generic (PLEG): container finished" podID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerID="04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e" exitCode=0 Nov 25 12:17:36 crc kubenswrapper[4688]: I1125 12:17:36.194723 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps6n8" event={"ID":"ed58cdcf-2778-4c9f-8ef5-915035ad0800","Type":"ContainerDied","Data":"04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e"} Nov 25 12:17:37 crc kubenswrapper[4688]: I1125 12:17:37.200742 4688 generic.go:334] "Generic (PLEG): container finished" podID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerID="13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08" exitCode=0 Nov 25 12:17:37 crc kubenswrapper[4688]: I1125 12:17:37.200838 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g9dv" event={"ID":"52724116-f0b6-48c0-9de2-a6dc6ba73524","Type":"ContainerDied","Data":"13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08"} Nov 25 12:17:37 crc kubenswrapper[4688]: I1125 12:17:37.205066 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps6n8" event={"ID":"ed58cdcf-2778-4c9f-8ef5-915035ad0800","Type":"ContainerStarted","Data":"2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf"} Nov 25 12:17:37 crc kubenswrapper[4688]: I1125 12:17:37.239210 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ps6n8" podStartSLOduration=2.286887154 podStartE2EDuration="53.23917843s" podCreationTimestamp="2025-11-25 12:16:44 +0000 UTC" firstStartedPulling="2025-11-25 12:16:45.687969309 +0000 UTC m=+155.797598177" lastFinishedPulling="2025-11-25 12:17:36.640260585 +0000 UTC m=+206.749889453" observedRunningTime="2025-11-25 12:17:37.237988348 +0000 UTC m=+207.347617226" watchObservedRunningTime="2025-11-25 12:17:37.23917843 +0000 UTC m=+207.348807308" Nov 25 12:17:38 crc kubenswrapper[4688]: I1125 12:17:38.216254 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g9dv" event={"ID":"52724116-f0b6-48c0-9de2-a6dc6ba73524","Type":"ContainerStarted","Data":"f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9"} Nov 25 12:17:38 crc kubenswrapper[4688]: I1125 12:17:38.252147 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6g9dv" podStartSLOduration=2.222795683 podStartE2EDuration="57.252124723s" podCreationTimestamp="2025-11-25 12:16:41 +0000 UTC" firstStartedPulling="2025-11-25 12:16:42.549446094 +0000 UTC m=+152.659074962" lastFinishedPulling="2025-11-25 12:17:37.578775134 +0000 UTC m=+207.688404002" observedRunningTime="2025-11-25 12:17:38.248037303 +0000 UTC m=+208.357666171" 
watchObservedRunningTime="2025-11-25 12:17:38.252124723 +0000 UTC m=+208.361753591" Nov 25 12:17:39 crc kubenswrapper[4688]: I1125 12:17:39.222249 4688 generic.go:334] "Generic (PLEG): container finished" podID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerID="6f8e620d1c1fab963bddac72ebae43efbbbc73106809da4853622d1ae5be61e6" exitCode=0 Nov 25 12:17:39 crc kubenswrapper[4688]: I1125 12:17:39.222316 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62cmt" event={"ID":"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0","Type":"ContainerDied","Data":"6f8e620d1c1fab963bddac72ebae43efbbbc73106809da4853622d1ae5be61e6"} Nov 25 12:17:39 crc kubenswrapper[4688]: I1125 12:17:39.224914 4688 generic.go:334] "Generic (PLEG): container finished" podID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerID="6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5" exitCode=0 Nov 25 12:17:39 crc kubenswrapper[4688]: I1125 12:17:39.224971 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbcb" event={"ID":"d17b935e-550e-4a26-8974-0d8c70f0657f","Type":"ContainerDied","Data":"6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5"} Nov 25 12:17:40 crc kubenswrapper[4688]: I1125 12:17:40.233998 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62cmt" event={"ID":"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0","Type":"ContainerStarted","Data":"72d18d0ab468106bdefcc33afc6541aaae98667227b93569aa89bfa638030ca7"} Nov 25 12:17:40 crc kubenswrapper[4688]: I1125 12:17:40.236031 4688 generic.go:334] "Generic (PLEG): container finished" podID="07268d96-608c-48f2-8ffc-d30182070c75" containerID="1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44" exitCode=0 Nov 25 12:17:40 crc kubenswrapper[4688]: I1125 12:17:40.236113 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmsq" event={"ID":"07268d96-608c-48f2-8ffc-d30182070c75","Type":"ContainerDied","Data":"1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44"} Nov 25 12:17:40 crc kubenswrapper[4688]: I1125 12:17:40.243548 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbcb" event={"ID":"d17b935e-550e-4a26-8974-0d8c70f0657f","Type":"ContainerStarted","Data":"67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4"} Nov 25 12:17:40 crc kubenswrapper[4688]: I1125 12:17:40.285482 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6gbcb" podStartSLOduration=2.24892144 podStartE2EDuration="57.285467308s" podCreationTimestamp="2025-11-25 12:16:43 +0000 UTC" firstStartedPulling="2025-11-25 12:16:44.609173455 +0000 UTC m=+154.718802323" lastFinishedPulling="2025-11-25 12:17:39.645719323 +0000 UTC m=+209.755348191" observedRunningTime="2025-11-25 12:17:40.280773931 +0000 UTC m=+210.390402799" watchObservedRunningTime="2025-11-25 12:17:40.285467308 +0000 UTC m=+210.395096176" Nov 25 12:17:40 crc kubenswrapper[4688]: I1125 12:17:40.287618 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-62cmt" podStartSLOduration=2.273940561 podStartE2EDuration="57.287612125s" podCreationTimestamp="2025-11-25 12:16:43 +0000 UTC" firstStartedPulling="2025-11-25 12:16:44.610821678 +0000 UTC m=+154.720450546" lastFinishedPulling="2025-11-25 12:17:39.624493242 +0000 UTC 
m=+209.734122110" observedRunningTime="2025-11-25 12:17:40.258869962 +0000 UTC m=+210.368498830" watchObservedRunningTime="2025-11-25 12:17:40.287612125 +0000 UTC m=+210.397240993" Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.251305 4688 generic.go:334] "Generic (PLEG): container finished" podID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerID="b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed" exitCode=0 Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.251405 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2glp" event={"ID":"c57b3e0f-17f5-42a4-bc38-40f6d101aecf","Type":"ContainerDied","Data":"b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed"} Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.255088 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmsq" event={"ID":"07268d96-608c-48f2-8ffc-d30182070c75","Type":"ContainerStarted","Data":"21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8"} Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.260105 4688 generic.go:334] "Generic (PLEG): container finished" podID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerID="5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc" exitCode=0 Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.260158 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcrsk" event={"ID":"5e868b43-61d8-4178-9d1a-74ef463cc241","Type":"ContainerDied","Data":"5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc"} Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.330102 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8kmsq" podStartSLOduration=3.305243461 podStartE2EDuration="1m0.330073813s" podCreationTimestamp="2025-11-25 12:16:41 +0000 UTC" firstStartedPulling="2025-11-25 12:16:43.596428927 +0000 UTC m=+153.706057795" lastFinishedPulling="2025-11-25 12:17:40.621259279 +0000 UTC m=+210.730888147" observedRunningTime="2025-11-25 12:17:41.327309218 +0000 UTC m=+211.436938106" watchObservedRunningTime="2025-11-25 12:17:41.330073813 +0000 UTC m=+211.439702681" Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.721280 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.721345 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:17:41 crc kubenswrapper[4688]: I1125 12:17:41.897626 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:17:42 crc kubenswrapper[4688]: I1125 12:17:42.122607 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:17:42 crc kubenswrapper[4688]: I1125 12:17:42.122979 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:17:42 crc kubenswrapper[4688]: I1125 12:17:42.271711 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcrsk" 
event={"ID":"5e868b43-61d8-4178-9d1a-74ef463cc241","Type":"ContainerStarted","Data":"85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df"} Nov 25 12:17:42 crc kubenswrapper[4688]: I1125 12:17:42.275145 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2glp" event={"ID":"c57b3e0f-17f5-42a4-bc38-40f6d101aecf","Type":"ContainerStarted","Data":"66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac"} Nov 25 12:17:42 crc kubenswrapper[4688]: I1125 12:17:42.324969 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:17:42 crc kubenswrapper[4688]: I1125 12:17:42.342624 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lcrsk" podStartSLOduration=2.157233415 podStartE2EDuration="1m1.342598443s" podCreationTimestamp="2025-11-25 12:16:41 +0000 UTC" firstStartedPulling="2025-11-25 12:16:42.528263941 +0000 UTC m=+152.637892809" lastFinishedPulling="2025-11-25 12:17:41.713628969 +0000 UTC m=+211.823257837" observedRunningTime="2025-11-25 12:17:42.296905043 +0000 UTC m=+212.406533911" watchObservedRunningTime="2025-11-25 12:17:42.342598443 +0000 UTC m=+212.452227311" Nov 25 12:17:43 crc kubenswrapper[4688]: I1125 12:17:43.174553 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8kmsq" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="registry-server" probeResult="failure" output=< Nov 25 12:17:43 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Nov 25 12:17:43 crc kubenswrapper[4688]: > Nov 25 12:17:43 crc kubenswrapper[4688]: I1125 12:17:43.301283 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2glp" podStartSLOduration=3.053575103 podStartE2EDuration="1m2.301255144s" podCreationTimestamp="2025-11-25 12:16:41 +0000 UTC" firstStartedPulling="2025-11-25 12:16:42.545663155 +0000 UTC m=+152.655292023" lastFinishedPulling="2025-11-25 12:17:41.793343196 +0000 UTC m=+211.902972064" observedRunningTime="2025-11-25 12:17:43.297745249 +0000 UTC m=+213.407374117" watchObservedRunningTime="2025-11-25 12:17:43.301255144 +0000 UTC m=+213.410884012" Nov 25 12:17:43 crc kubenswrapper[4688]: I1125 12:17:43.470652 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:17:43 crc kubenswrapper[4688]: I1125 12:17:43.470753 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:17:43 crc kubenswrapper[4688]: I1125 12:17:43.518919 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:17:43 crc kubenswrapper[4688]: I1125 12:17:43.947556 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:17:43 crc kubenswrapper[4688]: I1125 12:17:43.947913 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:17:43 crc kubenswrapper[4688]: I1125 12:17:43.987849 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:17:44 crc kubenswrapper[4688]: I1125 12:17:44.327844 4688 
Nov 25 12:17:44 crc kubenswrapper[4688]: I1125 12:17:44.345485 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6gbcb"
Nov 25 12:17:44 crc kubenswrapper[4688]: I1125 12:17:44.676173 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ps6n8"
Nov 25 12:17:44 crc kubenswrapper[4688]: I1125 12:17:44.683950 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ps6n8"
Nov 25 12:17:44 crc kubenswrapper[4688]: I1125 12:17:44.728116 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ps6n8"
Nov 25 12:17:45 crc kubenswrapper[4688]: I1125 12:17:45.029506 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62cmt"]
Nov 25 12:17:45 crc kubenswrapper[4688]: I1125 12:17:45.079722 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dvn5d"
Nov 25 12:17:45 crc kubenswrapper[4688]: I1125 12:17:45.080409 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dvn5d"
Nov 25 12:17:45 crc kubenswrapper[4688]: I1125 12:17:45.114455 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dvn5d"
Nov 25 12:17:45 crc kubenswrapper[4688]: I1125 12:17:45.331091 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dvn5d"
Nov 25 12:17:45 crc kubenswrapper[4688]: I1125 12:17:45.335811 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ps6n8"
Nov 25 12:17:46 crc kubenswrapper[4688]: I1125 12:17:46.296593 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-62cmt" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerName="registry-server" containerID="cri-o://72d18d0ab468106bdefcc33afc6541aaae98667227b93569aa89bfa638030ca7" gracePeriod=2
Nov 25 12:17:47 crc kubenswrapper[4688]: I1125 12:17:47.306477 4688 generic.go:334] "Generic (PLEG): container finished" podID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerID="72d18d0ab468106bdefcc33afc6541aaae98667227b93569aa89bfa638030ca7" exitCode=0
Nov 25 12:17:47 crc kubenswrapper[4688]: I1125 12:17:47.306560 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62cmt" event={"ID":"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0","Type":"ContainerDied","Data":"72d18d0ab468106bdefcc33afc6541aaae98667227b93569aa89bfa638030ca7"}
Nov 25 12:17:47 crc kubenswrapper[4688]: I1125 12:17:47.854089 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:17:47 crc kubenswrapper[4688]: I1125 12:17:47.854159 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:17:47 crc kubenswrapper[4688]: I1125 12:17:47.854212 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6"
Nov 25 12:17:47 crc kubenswrapper[4688]: I1125 12:17:47.854963 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 12:17:47 crc kubenswrapper[4688]: I1125 12:17:47.855107 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79" gracePeriod=600
Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.019299 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvn5d"]
Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.313093 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dvn5d" podUID="b022517b-f6e5-412c-b408-905938e25bbd" containerName="registry-server" containerID="cri-o://28ad4b8da71fb36b5e017292e2f3279e2126a9dc3bcb2e711bc4b18507feb36a" gracePeriod=2
Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.496480 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62cmt"
Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.639509 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-catalog-content\") pod \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") "
Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.639574 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4rps\" (UniqueName: \"kubernetes.io/projected/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-kube-api-access-l4rps\") pod \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") "
Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.639643 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-utilities\") pod \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\" (UID: \"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0\") "
Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.640690 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-utilities" (OuterVolumeSpecName: "utilities") pod "b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" (UID: "b2fdd23a-1f19-466c-bcc4-c75a69ed63f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.645225 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-kube-api-access-l4rps" (OuterVolumeSpecName: "kube-api-access-l4rps") pod "b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" (UID: "b2fdd23a-1f19-466c-bcc4-c75a69ed63f0"). InnerVolumeSpecName "kube-api-access-l4rps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.664299 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" (UID: "b2fdd23a-1f19-466c-bcc4-c75a69ed63f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.741198 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.741233 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4rps\" (UniqueName: \"kubernetes.io/projected/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-kube-api-access-l4rps\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:48 crc kubenswrapper[4688]: I1125 12:17:48.741247 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.321951 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62cmt" Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.321922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62cmt" event={"ID":"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0","Type":"ContainerDied","Data":"d455e2711722786c1ebf32a25e24cbdd3d3cf8c3763ef1eb4e0e241fd5958554"} Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.322111 4688 scope.go:117] "RemoveContainer" containerID="72d18d0ab468106bdefcc33afc6541aaae98667227b93569aa89bfa638030ca7" Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.327628 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79" exitCode=0 Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.327827 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79"} Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.346594 4688 scope.go:117] "RemoveContainer" containerID="6f8e620d1c1fab963bddac72ebae43efbbbc73106809da4853622d1ae5be61e6" Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.347472 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62cmt"] Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.351895 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-62cmt"] Nov 25 12:17:49 crc kubenswrapper[4688]: I1125 12:17:49.377367 4688 scope.go:117] "RemoveContainer" containerID="1d8542f2d6345f930ae16de028f69a3649d4b2eae879154d1c52365d8b2177c5" Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.336294 4688 generic.go:334] "Generic (PLEG): container finished" podID="b022517b-f6e5-412c-b408-905938e25bbd" containerID="28ad4b8da71fb36b5e017292e2f3279e2126a9dc3bcb2e711bc4b18507feb36a" exitCode=0 Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.336353 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvn5d" event={"ID":"b022517b-f6e5-412c-b408-905938e25bbd","Type":"ContainerDied","Data":"28ad4b8da71fb36b5e017292e2f3279e2126a9dc3bcb2e711bc4b18507feb36a"} Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.339809 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"819b0f817aeb0d3804bd7ffe9f7fb12d89359e0a946dcb6d57d85ef5b466a5d9"} Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.549928 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.569125 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-utilities\") pod \"b022517b-f6e5-412c-b408-905938e25bbd\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.569311 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7xdt\" (UniqueName: \"kubernetes.io/projected/b022517b-f6e5-412c-b408-905938e25bbd-kube-api-access-j7xdt\") pod \"b022517b-f6e5-412c-b408-905938e25bbd\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.570897 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-utilities" (OuterVolumeSpecName: "utilities") pod "b022517b-f6e5-412c-b408-905938e25bbd" (UID: "b022517b-f6e5-412c-b408-905938e25bbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.571701 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-catalog-content\") pod \"b022517b-f6e5-412c-b408-905938e25bbd\" (UID: \"b022517b-f6e5-412c-b408-905938e25bbd\") " Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.572345 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.610819 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b022517b-f6e5-412c-b408-905938e25bbd-kube-api-access-j7xdt" (OuterVolumeSpecName: "kube-api-access-j7xdt") pod "b022517b-f6e5-412c-b408-905938e25bbd" (UID: "b022517b-f6e5-412c-b408-905938e25bbd"). InnerVolumeSpecName "kube-api-access-j7xdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.665408 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b022517b-f6e5-412c-b408-905938e25bbd" (UID: "b022517b-f6e5-412c-b408-905938e25bbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.672951 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7xdt\" (UniqueName: \"kubernetes.io/projected/b022517b-f6e5-412c-b408-905938e25bbd-kube-api-access-j7xdt\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.672980 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b022517b-f6e5-412c-b408-905938e25bbd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:50 crc kubenswrapper[4688]: I1125 12:17:50.748821 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" path="/var/lib/kubelet/pods/b2fdd23a-1f19-466c-bcc4-c75a69ed63f0/volumes" Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.350906 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvn5d" event={"ID":"b022517b-f6e5-412c-b408-905938e25bbd","Type":"ContainerDied","Data":"d6137753ea5186a0a1a417a91dc7c60534865894472817ba38ab5b6f042ba47c"} Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.350970 4688 scope.go:117] "RemoveContainer" containerID="28ad4b8da71fb36b5e017292e2f3279e2126a9dc3bcb2e711bc4b18507feb36a" Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.350987 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvn5d" Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.380146 4688 scope.go:117] "RemoveContainer" containerID="ddb825e571ebaef762592f2049452c59e643cb215801c25bb4c1dffbb858f6e0" Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.401747 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvn5d"] Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.401801 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dvn5d"] Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.402757 4688 scope.go:117] "RemoveContainer" containerID="c027355f36a5a8ac3a7c0a97d579ec6f2b3a00cfb55bdc48edfd57962250ba99" Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.565198 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.565389 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:17:51 crc kubenswrapper[4688]: I1125 12:17:51.615484 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:17:52 crc kubenswrapper[4688]: I1125 12:17:52.028146 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:17:52 crc kubenswrapper[4688]: I1125 12:17:52.028488 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:17:52 crc kubenswrapper[4688]: I1125 12:17:52.098391 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:17:52 crc kubenswrapper[4688]: I1125 12:17:52.168705 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:17:52 crc kubenswrapper[4688]: I1125 12:17:52.224472 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:17:52 crc kubenswrapper[4688]: I1125 12:17:52.406899 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:17:52 crc kubenswrapper[4688]: I1125 12:17:52.427552 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:17:52 crc kubenswrapper[4688]: I1125 12:17:52.748246 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b022517b-f6e5-412c-b408-905938e25bbd" path="/var/lib/kubelet/pods/b022517b-f6e5-412c-b408-905938e25bbd/volumes" Nov 25 12:17:54 crc kubenswrapper[4688]: I1125 12:17:54.820608 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lcrsk"] Nov 25 12:17:54 crc kubenswrapper[4688]: I1125 12:17:54.821156 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lcrsk" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerName="registry-server" containerID="cri-o://85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df" gracePeriod=2 Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.203358 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.235892 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-catalog-content\") pod \"5e868b43-61d8-4178-9d1a-74ef463cc241\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.235936 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-utilities\") pod \"5e868b43-61d8-4178-9d1a-74ef463cc241\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.235965 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr65f\" (UniqueName: \"kubernetes.io/projected/5e868b43-61d8-4178-9d1a-74ef463cc241-kube-api-access-lr65f\") pod \"5e868b43-61d8-4178-9d1a-74ef463cc241\" (UID: \"5e868b43-61d8-4178-9d1a-74ef463cc241\") " Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.236671 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-utilities" (OuterVolumeSpecName: "utilities") pod "5e868b43-61d8-4178-9d1a-74ef463cc241" (UID: "5e868b43-61d8-4178-9d1a-74ef463cc241"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.250694 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e868b43-61d8-4178-9d1a-74ef463cc241-kube-api-access-lr65f" (OuterVolumeSpecName: "kube-api-access-lr65f") pod "5e868b43-61d8-4178-9d1a-74ef463cc241" (UID: "5e868b43-61d8-4178-9d1a-74ef463cc241"). InnerVolumeSpecName "kube-api-access-lr65f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.296960 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e868b43-61d8-4178-9d1a-74ef463cc241" (UID: "5e868b43-61d8-4178-9d1a-74ef463cc241"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.337255 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.337300 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e868b43-61d8-4178-9d1a-74ef463cc241-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.337313 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr65f\" (UniqueName: \"kubernetes.io/projected/5e868b43-61d8-4178-9d1a-74ef463cc241-kube-api-access-lr65f\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.378201 4688 generic.go:334] "Generic (PLEG): container finished" podID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerID="85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df" exitCode=0 Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.378246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcrsk" event={"ID":"5e868b43-61d8-4178-9d1a-74ef463cc241","Type":"ContainerDied","Data":"85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df"} Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.378272 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lcrsk" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.378306 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcrsk" event={"ID":"5e868b43-61d8-4178-9d1a-74ef463cc241","Type":"ContainerDied","Data":"52562d12b82805ab0a4e6e4b36f46f743b772ba494dc7bac997be4479dec5616"} Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.378331 4688 scope.go:117] "RemoveContainer" containerID="85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.398580 4688 scope.go:117] "RemoveContainer" containerID="5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.403034 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lcrsk"] Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.406083 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lcrsk"] Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.428098 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kmsq"] Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.428343 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8kmsq" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="registry-server" containerID="cri-o://21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8" gracePeriod=2 Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.433567 4688 scope.go:117] "RemoveContainer" containerID="eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.450657 4688 scope.go:117] "RemoveContainer" containerID="85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df" Nov 25 12:17:55 crc kubenswrapper[4688]: E1125 12:17:55.451313 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df\": container with ID starting with 85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df not found: ID does not exist" containerID="85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.451345 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df"} err="failed to get container status \"85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df\": rpc error: code = NotFound desc = could not find container \"85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df\": container with ID starting with 85eca831c743b7e2a2c34cdb861a07066d85cb1d916e5f7d32e0623641fbf4df not found: ID does not exist" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.451365 4688 scope.go:117] "RemoveContainer" containerID="5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc" Nov 25 12:17:55 crc kubenswrapper[4688]: E1125 12:17:55.451582 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc\": container with ID starting with 
5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc not found: ID does not exist" containerID="5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.451614 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc"} err="failed to get container status \"5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc\": rpc error: code = NotFound desc = could not find container \"5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc\": container with ID starting with 5974dcdc484570de121906b432df4876acc28b1dea767f2ef27822dbddf742fc not found: ID does not exist" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.451631 4688 scope.go:117] "RemoveContainer" containerID="eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc" Nov 25 12:17:55 crc kubenswrapper[4688]: E1125 12:17:55.451853 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc\": container with ID starting with eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc not found: ID does not exist" containerID="eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.451873 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc"} err="failed to get container status \"eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc\": rpc error: code = NotFound desc = could not find container \"eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc\": container with ID starting with eb7fd25157123cfa9b919e809261f05e76ced158a7eb9577d976ae6ac5300ecc not found: ID does not exist" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.750269 4688 util.go:48] "No ready sandbox for pod can be found. 
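[Editor's note] The E-level entries that follow each RemoveContainer here are a benign race, not a failure: by the time the kubelet asks CRI-O for the status of a container it wants to delete, the runtime has already removed it, so it answers with gRPC NotFound and the kubelet just logs the miss. Clients of the CRI typically treat NotFound on delete as "already done"; a minimal sketch of that idempotent pattern (the interface, helper, and stub are assumptions for illustration, not kubelet code):

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient stands in for the CRI runtime service.
type runtimeClient interface {
	ContainerStatus(ctx context.Context, id string) error
	RemoveContainer(ctx context.Context, id string) error
}

// removeContainer collapses a NotFound from the runtime into success,
// which is why the "DeleteContainer returned error" lines above are
// logged and dropped rather than retried.
func removeContainer(ctx context.Context, rt runtimeClient, id string) error {
	if err := rt.ContainerStatus(ctx, id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // container already gone; deletion is idempotent
		}
		return err
	}
	return rt.RemoveContainer(ctx, id)
}

// gone simulates a runtime that has already removed every container.
type gone struct{}

func (gone) ContainerStatus(ctx context.Context, id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}
func (gone) RemoveContainer(ctx context.Context, id string) error { return nil }

func main() {
	err := removeContainer(context.Background(), gone{}, "85eca831c743")
	fmt.Println("removeContainer:", err) // <nil>: NotFound treated as success
}
```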
Need to start a new one" pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.843944 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-utilities\") pod \"07268d96-608c-48f2-8ffc-d30182070c75\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.844138 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrhj6\" (UniqueName: \"kubernetes.io/projected/07268d96-608c-48f2-8ffc-d30182070c75-kube-api-access-vrhj6\") pod \"07268d96-608c-48f2-8ffc-d30182070c75\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.844168 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-catalog-content\") pod \"07268d96-608c-48f2-8ffc-d30182070c75\" (UID: \"07268d96-608c-48f2-8ffc-d30182070c75\") " Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.844661 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-utilities" (OuterVolumeSpecName: "utilities") pod "07268d96-608c-48f2-8ffc-d30182070c75" (UID: "07268d96-608c-48f2-8ffc-d30182070c75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.850676 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07268d96-608c-48f2-8ffc-d30182070c75-kube-api-access-vrhj6" (OuterVolumeSpecName: "kube-api-access-vrhj6") pod "07268d96-608c-48f2-8ffc-d30182070c75" (UID: "07268d96-608c-48f2-8ffc-d30182070c75"). InnerVolumeSpecName "kube-api-access-vrhj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.891817 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07268d96-608c-48f2-8ffc-d30182070c75" (UID: "07268d96-608c-48f2-8ffc-d30182070c75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.946010 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrhj6\" (UniqueName: \"kubernetes.io/projected/07268d96-608c-48f2-8ffc-d30182070c75-kube-api-access-vrhj6\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.946043 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:55 crc kubenswrapper[4688]: I1125 12:17:55.946054 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07268d96-608c-48f2-8ffc-d30182070c75-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.387085 4688 generic.go:334] "Generic (PLEG): container finished" podID="07268d96-608c-48f2-8ffc-d30182070c75" containerID="21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8" exitCode=0 Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.387185 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kmsq" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.387184 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmsq" event={"ID":"07268d96-608c-48f2-8ffc-d30182070c75","Type":"ContainerDied","Data":"21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8"} Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.387940 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmsq" event={"ID":"07268d96-608c-48f2-8ffc-d30182070c75","Type":"ContainerDied","Data":"3f8983a47166b6b821e425917cf67205fb73d6019c1812de15a2884082b7eab9"} Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.388001 4688 scope.go:117] "RemoveContainer" containerID="21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.406351 4688 scope.go:117] "RemoveContainer" containerID="1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.418397 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kmsq"] Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.426006 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8kmsq"] Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.439746 4688 scope.go:117] "RemoveContainer" containerID="41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.454982 4688 scope.go:117] "RemoveContainer" containerID="21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8" Nov 25 12:17:56 crc kubenswrapper[4688]: E1125 12:17:56.455400 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8\": container with ID starting with 21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8 not found: ID does not exist" containerID="21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.455602 
4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8"} err="failed to get container status \"21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8\": rpc error: code = NotFound desc = could not find container \"21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8\": container with ID starting with 21347fe813a6ec02ea4d3971baabf959d019a8bd54d7c06863eb1633bf23a3e8 not found: ID does not exist" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.455727 4688 scope.go:117] "RemoveContainer" containerID="1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44" Nov 25 12:17:56 crc kubenswrapper[4688]: E1125 12:17:56.456232 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44\": container with ID starting with 1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44 not found: ID does not exist" containerID="1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.456263 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44"} err="failed to get container status \"1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44\": rpc error: code = NotFound desc = could not find container \"1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44\": container with ID starting with 1cd72585d3b7ac2e1ab9bd5b5532ca9333015ff5ff84aa17b182c09b84c64c44 not found: ID does not exist" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.456282 4688 scope.go:117] "RemoveContainer" containerID="41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16" Nov 25 12:17:56 crc kubenswrapper[4688]: E1125 12:17:56.456561 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16\": container with ID starting with 41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16 not found: ID does not exist" containerID="41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.456589 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16"} err="failed to get container status \"41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16\": rpc error: code = NotFound desc = could not find container \"41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16\": container with ID starting with 41e79cf4f1a8462f70fe6c237edb75743b3ebb440c979add5468825a94566b16 not found: ID does not exist" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.747869 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07268d96-608c-48f2-8ffc-d30182070c75" path="/var/lib/kubelet/pods/07268d96-608c-48f2-8ffc-d30182070c75/volumes" Nov 25 12:17:56 crc kubenswrapper[4688]: I1125 12:17:56.748695 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" path="/var/lib/kubelet/pods/5e868b43-61d8-4178-9d1a-74ef463cc241/volumes" Nov 25 12:18:03 crc kubenswrapper[4688]: I1125 
Nov 25 12:18:03 crc kubenswrapper[4688]: I1125 12:18:03.440547 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6pbkn"]
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.478185 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" podUID="775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" containerName="oauth-openshift" containerID="cri-o://875f41575a8ed64cea2970207bff7e02a1b73631610dea6a324f1570af8288da" gracePeriod=15
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.700669 4688 generic.go:334] "Generic (PLEG): container finished" podID="775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" containerID="875f41575a8ed64cea2970207bff7e02a1b73631610dea6a324f1570af8288da" exitCode=0
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.700742 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" event={"ID":"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4","Type":"ContainerDied","Data":"875f41575a8ed64cea2970207bff7e02a1b73631610dea6a324f1570af8288da"}
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.900460 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.931756 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6c5fcdcf5-m674p"]
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.931982 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.931996 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932010 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932018 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932030 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b022517b-f6e5-412c-b408-905938e25bbd" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932037 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b022517b-f6e5-412c-b408-905938e25bbd" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932048 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b022517b-f6e5-412c-b408-905938e25bbd" containerName="extract-content"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932055 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b022517b-f6e5-412c-b408-905938e25bbd" containerName="extract-content"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932063 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="extract-utilities"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932072 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="extract-utilities"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932080 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" containerName="oauth-openshift"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932087 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" containerName="oauth-openshift"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932098 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694823ec-105f-4183-9dfb-8fa7f414c8ac" containerName="collect-profiles"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932105 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="694823ec-105f-4183-9dfb-8fa7f414c8ac" containerName="collect-profiles"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932118 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932125 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932134 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerName="extract-utilities"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932141 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerName="extract-utilities"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932151 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="extract-content"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932158 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="extract-content"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932166 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerName="extract-content"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932173 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerName="extract-content"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932183 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerName="extract-utilities"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932190 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerName="extract-utilities"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932201 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b022517b-f6e5-412c-b408-905938e25bbd" containerName="extract-utilities"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932208 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b022517b-f6e5-412c-b408-905938e25bbd" containerName="extract-utilities"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932216 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerName="extract-content"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932223 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerName="extract-content"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932237 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a5cfef-89fa-47bd-9f07-95ab7ced310f" containerName="pruner"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932244 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a5cfef-89fa-47bd-9f07-95ab7ced310f" containerName="pruner"
Nov 25 12:18:28 crc kubenswrapper[4688]: E1125 12:18:28.932255 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13f2c3c-3deb-4354-a34b-6aa446583697" containerName="pruner"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932262 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13f2c3c-3deb-4354-a34b-6aa446583697" containerName="pruner"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932381 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d13f2c3c-3deb-4354-a34b-6aa446583697" containerName="pruner"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932392 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fdd23a-1f19-466c-bcc4-c75a69ed63f0" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932405 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e868b43-61d8-4178-9d1a-74ef463cc241" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932414 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a5cfef-89fa-47bd-9f07-95ab7ced310f" containerName="pruner"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932427 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" containerName="oauth-openshift"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932439 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="694823ec-105f-4183-9dfb-8fa7f414c8ac" containerName="collect-profiles"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932447 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b022517b-f6e5-412c-b408-905938e25bbd" containerName="registry-server"
Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.932456 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="07268d96-608c-48f2-8ffc-d30182070c75" containerName="registry-server"
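[Editor's note] Admitting the replacement oauth-openshift pod first triggers a sweep of the CPU and memory managers: every (pod UID, container) assignment left behind by the pods deleted above is dropped from in-memory state, so the E-level RemoveStaleState lines are expected noise, not errors. Schematically (the data layout is an assumption for illustration, not kubelet's cpu_manager/state packages):

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops any assignment whose pod is no longer active,
// mirroring the "RemoveStaleState" / "Deleted CPUSet assignment" pairs
// in the log. Deleting during range is safe for Go maps.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"b2fdd23a-1f19-466c-bcc4-c75a69ed63f0", "registry-server"}: "cpuset 0-1",
	}
	// The deleted catalog pods are absent from the active set.
	removeStaleState(assignments, map[string]bool{})
	fmt.Println("remaining assignments:", len(assignments)) // 0
}
```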
Need to start a new one" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.942607 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c5fcdcf5-m674p"] Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976125 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-policies\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976195 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-dir\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976248 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-cliconfig\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976299 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf7j6\" (UniqueName: \"kubernetes.io/projected/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-kube-api-access-wf7j6\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976348 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-serving-cert\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976366 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976448 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-provider-selection\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976489 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-login\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976519 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-router-certs\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976579 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-error\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976619 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-idp-0-file-data\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976670 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-trusted-ca-bundle\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976700 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-session\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976746 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-service-ca\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.976782 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-ocp-branding-template\") pod \"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\" (UID: 
\"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4\") " Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.977015 4688 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.977104 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.977138 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.979904 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.979976 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.982371 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.982653 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-kube-api-access-wf7j6" (OuterVolumeSpecName: "kube-api-access-wf7j6") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "kube-api-access-wf7j6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.983031 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.983698 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.985894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.986195 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.986888 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.997865 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:28 crc kubenswrapper[4688]: I1125 12:18:28.998166 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" (UID: "775d4f26-1a83-4fe6-b4c7-33f5f586fcd4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.078276 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.078724 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-session\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.078836 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.078865 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-audit-policies\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.078902 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079100 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-error\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079170 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2shd\" (UniqueName: \"kubernetes.io/projected/e4d428f5-7712-4263-85b9-5be943efcefa-kube-api-access-h2shd\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079242 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079281 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079309 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-login\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079331 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e4d428f5-7712-4263-85b9-5be943efcefa-audit-dir\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079403 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079466 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079617 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079637 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc 
kubenswrapper[4688]: I1125 12:18:29.079652 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079665 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079680 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079695 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079709 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079753 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079770 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079783 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079795 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079807 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf7j6\" (UniqueName: \"kubernetes.io/projected/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-kube-api-access-wf7j6\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.079820 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.180875 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.180969 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e4d428f5-7712-4263-85b9-5be943efcefa-audit-dir\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181027 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-login\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181081 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181136 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181196 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181232 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e4d428f5-7712-4263-85b9-5be943efcefa-audit-dir\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181244 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-session\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " 
pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181575 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181615 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-audit-policies\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181673 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181891 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-error\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.181963 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2shd\" (UniqueName: \"kubernetes.io/projected/e4d428f5-7712-4263-85b9-5be943efcefa-kube-api-access-h2shd\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.182021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.182825 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.182879 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 
crc kubenswrapper[4688]: I1125 12:18:29.183029 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.184833 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e4d428f5-7712-4263-85b9-5be943efcefa-audit-policies\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.186685 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.187687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-error\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.188333 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.188510 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-login\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.188742 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.189305 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-session\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.190099 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.190429 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e4d428f5-7712-4263-85b9-5be943efcefa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.212910 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2shd\" (UniqueName: \"kubernetes.io/projected/e4d428f5-7712-4263-85b9-5be943efcefa-kube-api-access-h2shd\") pod \"oauth-openshift-6c5fcdcf5-m674p\" (UID: \"e4d428f5-7712-4263-85b9-5be943efcefa\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.259363 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.723574 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" event={"ID":"775d4f26-1a83-4fe6-b4c7-33f5f586fcd4","Type":"ContainerDied","Data":"d4d9361a04bcd32df1a3200e3ff84229b94c003ab61cea2d819a9c3b8174f395"} Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.723767 4688 scope.go:117] "RemoveContainer" containerID="875f41575a8ed64cea2970207bff7e02a1b73631610dea6a324f1570af8288da" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.723808 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6pbkn" Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.741142 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c5fcdcf5-m674p"] Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.781873 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6pbkn"] Nov 25 12:18:29 crc kubenswrapper[4688]: I1125 12:18:29.785229 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6pbkn"] Nov 25 12:18:30 crc kubenswrapper[4688]: I1125 12:18:30.730027 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" event={"ID":"e4d428f5-7712-4263-85b9-5be943efcefa","Type":"ContainerStarted","Data":"74b7b1b341d910c3d0bd09f3b7ce1194cac381148e5f177397f8eabf78f8ffb7"} Nov 25 12:18:30 crc kubenswrapper[4688]: I1125 12:18:30.730678 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:30 crc kubenswrapper[4688]: I1125 12:18:30.730695 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" event={"ID":"e4d428f5-7712-4263-85b9-5be943efcefa","Type":"ContainerStarted","Data":"682b57f4587e58fa496f8502726357d6cabe66e571ea008e1ec272aa1668d547"} Nov 25 12:18:30 crc kubenswrapper[4688]: I1125 12:18:30.738247 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" Nov 25 12:18:30 crc kubenswrapper[4688]: I1125 12:18:30.747670 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775d4f26-1a83-4fe6-b4c7-33f5f586fcd4" path="/var/lib/kubelet/pods/775d4f26-1a83-4fe6-b4c7-33f5f586fcd4/volumes" Nov 25 12:18:30 crc kubenswrapper[4688]: I1125 12:18:30.766040 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-m674p" podStartSLOduration=27.766012668 podStartE2EDuration="27.766012668s" podCreationTimestamp="2025-11-25 12:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:18:30.762235846 +0000 UTC m=+260.871864714" watchObservedRunningTime="2025-11-25 12:18:30.766012668 +0000 UTC m=+260.875641536" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.095651 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6g9dv"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.096593 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6g9dv" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerName="registry-server" containerID="cri-o://f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9" gracePeriod=30 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.105142 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2glp"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.105324 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2glp" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerName="registry-server" 
containerID="cri-o://66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac" gracePeriod=30 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.121653 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5gsx"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.121904 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" podUID="b16521c9-4940-4ab4-acda-cec2b56f285e" containerName="marketplace-operator" containerID="cri-o://b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c" gracePeriod=30 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.130390 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbcb"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.130693 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6gbcb" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="registry-server" containerID="cri-o://67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4" gracePeriod=30 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.143824 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ps6lt"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.144880 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.147088 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ps6n8"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.147444 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ps6n8" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerName="registry-server" containerID="cri-o://2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf" gracePeriod=30 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.166819 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ps6lt"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.286028 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr57h\" (UniqueName: \"kubernetes.io/projected/499bbc68-a6dd-4670-acef-2dfcce904fc3-kube-api-access-dr57h\") pod \"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.286100 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/499bbc68-a6dd-4670-acef-2dfcce904fc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.286156 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/499bbc68-a6dd-4670-acef-2dfcce904fc3-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.385963 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded58cdcf_2778_4c9f_8ef5_915035ad0800.slice/crio-conmon-2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded58cdcf_2778_4c9f_8ef5_915035ad0800.slice/crio-2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd17b935e_550e_4a26_8974_0d8c70f0657f.slice/crio-conmon-67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd17b935e_550e_4a26_8974_0d8c70f0657f.slice/crio-67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.387921 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr57h\" (UniqueName: \"kubernetes.io/projected/499bbc68-a6dd-4670-acef-2dfcce904fc3-kube-api-access-dr57h\") pod \"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.388094 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/499bbc68-a6dd-4670-acef-2dfcce904fc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.388248 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/499bbc68-a6dd-4670-acef-2dfcce904fc3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.391149 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/499bbc68-a6dd-4670-acef-2dfcce904fc3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.398609 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/499bbc68-a6dd-4670-acef-2dfcce904fc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.412171 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr57h\" (UniqueName: \"kubernetes.io/projected/499bbc68-a6dd-4670-acef-2dfcce904fc3-kube-api-access-dr57h\") pod \"marketplace-operator-79b997595-ps6lt\" (UID: \"499bbc68-a6dd-4670-acef-2dfcce904fc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.472725 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4 is running failed: container process not found" containerID="67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.473731 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4 is running failed: container process not found" containerID="67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.474709 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4 is running failed: container process not found" containerID="67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.474757 4688 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6gbcb" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="registry-server" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.570626 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.579546 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.641832 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.646026 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.662832 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.663055 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.693650 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn2ph\" (UniqueName: \"kubernetes.io/projected/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-kube-api-access-pn2ph\") pod \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.694082 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-utilities\") pod \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.694136 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-catalog-content\") pod \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\" (UID: \"c57b3e0f-17f5-42a4-bc38-40f6d101aecf\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.698395 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-kube-api-access-pn2ph" (OuterVolumeSpecName: "kube-api-access-pn2ph") pod "c57b3e0f-17f5-42a4-bc38-40f6d101aecf" (UID: "c57b3e0f-17f5-42a4-bc38-40f6d101aecf"). InnerVolumeSpecName "kube-api-access-pn2ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.699685 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-utilities" (OuterVolumeSpecName: "utilities") pod "c57b3e0f-17f5-42a4-bc38-40f6d101aecf" (UID: "c57b3e0f-17f5-42a4-bc38-40f6d101aecf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.777829 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c57b3e0f-17f5-42a4-bc38-40f6d101aecf" (UID: "c57b3e0f-17f5-42a4-bc38-40f6d101aecf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796355 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-utilities\") pod \"d17b935e-550e-4a26-8974-0d8c70f0657f\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796443 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9glnf\" (UniqueName: \"kubernetes.io/projected/b16521c9-4940-4ab4-acda-cec2b56f285e-kube-api-access-9glnf\") pod \"b16521c9-4940-4ab4-acda-cec2b56f285e\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796513 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-utilities\") pod \"52724116-f0b6-48c0-9de2-a6dc6ba73524\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796573 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-catalog-content\") pod \"d17b935e-550e-4a26-8974-0d8c70f0657f\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796597 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82csw\" (UniqueName: \"kubernetes.io/projected/ed58cdcf-2778-4c9f-8ef5-915035ad0800-kube-api-access-82csw\") pod \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796659 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-operator-metrics\") pod \"b16521c9-4940-4ab4-acda-cec2b56f285e\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796726 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-trusted-ca\") pod \"b16521c9-4940-4ab4-acda-cec2b56f285e\" (UID: \"b16521c9-4940-4ab4-acda-cec2b56f285e\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796804 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-utilities\") pod \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796850 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-catalog-content\") pod \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\" (UID: \"ed58cdcf-2778-4c9f-8ef5-915035ad0800\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796918 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rchkb\" (UniqueName: 
\"kubernetes.io/projected/52724116-f0b6-48c0-9de2-a6dc6ba73524-kube-api-access-rchkb\") pod \"52724116-f0b6-48c0-9de2-a6dc6ba73524\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.796987 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzk4z\" (UniqueName: \"kubernetes.io/projected/d17b935e-550e-4a26-8974-0d8c70f0657f-kube-api-access-pzk4z\") pod \"d17b935e-550e-4a26-8974-0d8c70f0657f\" (UID: \"d17b935e-550e-4a26-8974-0d8c70f0657f\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.797017 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-catalog-content\") pod \"52724116-f0b6-48c0-9de2-a6dc6ba73524\" (UID: \"52724116-f0b6-48c0-9de2-a6dc6ba73524\") " Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.797344 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-utilities" (OuterVolumeSpecName: "utilities") pod "d17b935e-550e-4a26-8974-0d8c70f0657f" (UID: "d17b935e-550e-4a26-8974-0d8c70f0657f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.797420 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.797437 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn2ph\" (UniqueName: \"kubernetes.io/projected/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-kube-api-access-pn2ph\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.797470 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3e0f-17f5-42a4-bc38-40f6d101aecf-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.797975 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b16521c9-4940-4ab4-acda-cec2b56f285e" (UID: "b16521c9-4940-4ab4-acda-cec2b56f285e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.798596 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-utilities" (OuterVolumeSpecName: "utilities") pod "ed58cdcf-2778-4c9f-8ef5-915035ad0800" (UID: "ed58cdcf-2778-4c9f-8ef5-915035ad0800"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.800625 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed58cdcf-2778-4c9f-8ef5-915035ad0800-kube-api-access-82csw" (OuterVolumeSpecName: "kube-api-access-82csw") pod "ed58cdcf-2778-4c9f-8ef5-915035ad0800" (UID: "ed58cdcf-2778-4c9f-8ef5-915035ad0800"). InnerVolumeSpecName "kube-api-access-82csw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.801226 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16521c9-4940-4ab4-acda-cec2b56f285e-kube-api-access-9glnf" (OuterVolumeSpecName: "kube-api-access-9glnf") pod "b16521c9-4940-4ab4-acda-cec2b56f285e" (UID: "b16521c9-4940-4ab4-acda-cec2b56f285e"). InnerVolumeSpecName "kube-api-access-9glnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.801867 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b16521c9-4940-4ab4-acda-cec2b56f285e" (UID: "b16521c9-4940-4ab4-acda-cec2b56f285e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.802620 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17b935e-550e-4a26-8974-0d8c70f0657f-kube-api-access-pzk4z" (OuterVolumeSpecName: "kube-api-access-pzk4z") pod "d17b935e-550e-4a26-8974-0d8c70f0657f" (UID: "d17b935e-550e-4a26-8974-0d8c70f0657f"). InnerVolumeSpecName "kube-api-access-pzk4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.811209 4688 generic.go:334] "Generic (PLEG): container finished" podID="b16521c9-4940-4ab4-acda-cec2b56f285e" containerID="b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c" exitCode=0 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.811294 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.811376 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" event={"ID":"b16521c9-4940-4ab4-acda-cec2b56f285e","Type":"ContainerDied","Data":"b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.811489 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5gsx" event={"ID":"b16521c9-4940-4ab4-acda-cec2b56f285e","Type":"ContainerDied","Data":"fbc73f735ebdbb559630f43556005b133bf5795fd720644432fd894a6c6188e6"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.811668 4688 scope.go:117] "RemoveContainer" containerID="b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.814368 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52724116-f0b6-48c0-9de2-a6dc6ba73524-kube-api-access-rchkb" (OuterVolumeSpecName: "kube-api-access-rchkb") pod "52724116-f0b6-48c0-9de2-a6dc6ba73524" (UID: "52724116-f0b6-48c0-9de2-a6dc6ba73524"). InnerVolumeSpecName "kube-api-access-rchkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.818109 4688 generic.go:334] "Generic (PLEG): container finished" podID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerID="66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac" exitCode=0 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.818233 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2glp" event={"ID":"c57b3e0f-17f5-42a4-bc38-40f6d101aecf","Type":"ContainerDied","Data":"66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.818311 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2glp" event={"ID":"c57b3e0f-17f5-42a4-bc38-40f6d101aecf","Type":"ContainerDied","Data":"4640c7f67923c4b9a24899f3de570f30dca70e4a86c41423e7edde49639ab0e2"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.818339 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2glp" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.823805 4688 generic.go:334] "Generic (PLEG): container finished" podID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerID="f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9" exitCode=0 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.823875 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g9dv" event={"ID":"52724116-f0b6-48c0-9de2-a6dc6ba73524","Type":"ContainerDied","Data":"f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.824004 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g9dv" event={"ID":"52724116-f0b6-48c0-9de2-a6dc6ba73524","Type":"ContainerDied","Data":"9f3b88444215791232340b269e451dfe659fa69efcacdcd83da22ec9d9d5dce4"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.824054 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6g9dv" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.826140 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d17b935e-550e-4a26-8974-0d8c70f0657f" (UID: "d17b935e-550e-4a26-8974-0d8c70f0657f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.828445 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-utilities" (OuterVolumeSpecName: "utilities") pod "52724116-f0b6-48c0-9de2-a6dc6ba73524" (UID: "52724116-f0b6-48c0-9de2-a6dc6ba73524"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.831723 4688 generic.go:334] "Generic (PLEG): container finished" podID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerID="2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf" exitCode=0 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.831819 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps6n8" event={"ID":"ed58cdcf-2778-4c9f-8ef5-915035ad0800","Type":"ContainerDied","Data":"2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.831841 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps6n8" event={"ID":"ed58cdcf-2778-4c9f-8ef5-915035ad0800","Type":"ContainerDied","Data":"9319e4e399670e9123968b1306dd5b446017f0849c02bf7dd102a5cbbb938d65"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.831930 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps6n8" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.841875 4688 generic.go:334] "Generic (PLEG): container finished" podID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerID="67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4" exitCode=0 Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.841936 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbcb" event={"ID":"d17b935e-550e-4a26-8974-0d8c70f0657f","Type":"ContainerDied","Data":"67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.841971 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbcb" event={"ID":"d17b935e-550e-4a26-8974-0d8c70f0657f","Type":"ContainerDied","Data":"20a5db94837f5f3de9f5858843571885e743872bfdfb840e96681d5e1119a917"} Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.842035 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gbcb" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.847991 4688 scope.go:117] "RemoveContainer" containerID="b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c" Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.849504 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c\": container with ID starting with b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c not found: ID does not exist" containerID="b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.850596 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c"} err="failed to get container status \"b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c\": rpc error: code = NotFound desc = could not find container \"b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c\": container with ID starting with b9493c8e25c087952432691f2336892451d9b806cb940dcee5006337377d3e5c not found: ID does not exist" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.850641 4688 scope.go:117] "RemoveContainer" containerID="66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.869894 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5gsx"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.893160 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52724116-f0b6-48c0-9de2-a6dc6ba73524" (UID: "52724116-f0b6-48c0-9de2-a6dc6ba73524"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.893422 4688 scope.go:117] "RemoveContainer" containerID="b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.895022 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5gsx"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902617 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzk4z\" (UniqueName: \"kubernetes.io/projected/d17b935e-550e-4a26-8974-0d8c70f0657f-kube-api-access-pzk4z\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902654 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902666 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902678 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9glnf\" (UniqueName: \"kubernetes.io/projected/b16521c9-4940-4ab4-acda-cec2b56f285e-kube-api-access-9glnf\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902687 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52724116-f0b6-48c0-9de2-a6dc6ba73524-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902699 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17b935e-550e-4a26-8974-0d8c70f0657f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902710 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82csw\" (UniqueName: \"kubernetes.io/projected/ed58cdcf-2778-4c9f-8ef5-915035ad0800-kube-api-access-82csw\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902720 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902730 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16521c9-4940-4ab4-acda-cec2b56f285e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902743 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.902753 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rchkb\" (UniqueName: \"kubernetes.io/projected/52724116-f0b6-48c0-9de2-a6dc6ba73524-kube-api-access-rchkb\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.903769 4688 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed58cdcf-2778-4c9f-8ef5-915035ad0800" (UID: "ed58cdcf-2778-4c9f-8ef5-915035ad0800"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.906962 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2glp"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.914533 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2glp"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.920007 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbcb"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.922112 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbcb"] Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.923499 4688 scope.go:117] "RemoveContainer" containerID="be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.943917 4688 scope.go:117] "RemoveContainer" containerID="66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac" Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.944303 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac\": container with ID starting with 66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac not found: ID does not exist" containerID="66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.944343 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac"} err="failed to get container status \"66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac\": rpc error: code = NotFound desc = could not find container \"66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac\": container with ID starting with 66db0712edd72098e499c1a6d2fecfb9025dbc6cd170c8850ddd6511a9d56cac not found: ID does not exist" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.944369 4688 scope.go:117] "RemoveContainer" containerID="b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed" Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.944648 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed\": container with ID starting with b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed not found: ID does not exist" containerID="b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.944674 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed"} err="failed to get container status \"b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed\": rpc error: code = NotFound desc = could not find container 
\"b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed\": container with ID starting with b5e7bff50546b4bbb058424eb3a5e8d0938849f68c9985d40b2e0ba579b5aaed not found: ID does not exist" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.944691 4688 scope.go:117] "RemoveContainer" containerID="be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1" Nov 25 12:18:43 crc kubenswrapper[4688]: E1125 12:18:43.944949 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1\": container with ID starting with be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1 not found: ID does not exist" containerID="be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.944973 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1"} err="failed to get container status \"be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1\": rpc error: code = NotFound desc = could not find container \"be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1\": container with ID starting with be09d79de6c728be2a6e058b4e57d9292d280de9e1ba8ca32665951a38b3d0f1 not found: ID does not exist" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.944986 4688 scope.go:117] "RemoveContainer" containerID="f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.962333 4688 scope.go:117] "RemoveContainer" containerID="13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08" Nov 25 12:18:43 crc kubenswrapper[4688]: I1125 12:18:43.982162 4688 scope.go:117] "RemoveContainer" containerID="25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.003390 4688 scope.go:117] "RemoveContainer" containerID="f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.004031 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed58cdcf-2778-4c9f-8ef5-915035ad0800-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.004111 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9\": container with ID starting with f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9 not found: ID does not exist" containerID="f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.004136 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9"} err="failed to get container status \"f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9\": rpc error: code = NotFound desc = could not find container \"f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9\": container with ID starting with f3620fa30bafebf3ccec3e49089f4078066a80a11d61f4d596b0a4c2254a20c9 not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.004159 4688 
scope.go:117] "RemoveContainer" containerID="13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.004424 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08\": container with ID starting with 13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08 not found: ID does not exist" containerID="13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.004451 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08"} err="failed to get container status \"13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08\": rpc error: code = NotFound desc = could not find container \"13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08\": container with ID starting with 13dd85e6bbf159f3a1333ad7156f759f7632d485323aeea828e6898e7f9baf08 not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.004466 4688 scope.go:117] "RemoveContainer" containerID="25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.005638 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46\": container with ID starting with 25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46 not found: ID does not exist" containerID="25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.005690 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46"} err="failed to get container status \"25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46\": rpc error: code = NotFound desc = could not find container \"25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46\": container with ID starting with 25b6d1affe5ab406be46cd83aa3fbb1f4243f4a6af8e3ac8b4bd2939cfdd1e46 not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.005711 4688 scope.go:117] "RemoveContainer" containerID="2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.026854 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ps6lt"] Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.030981 4688 scope.go:117] "RemoveContainer" containerID="04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.051677 4688 scope.go:117] "RemoveContainer" containerID="92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.072036 4688 scope.go:117] "RemoveContainer" containerID="2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.081188 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf\": container with ID starting with 2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf not found: ID does not exist" containerID="2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.081237 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf"} err="failed to get container status \"2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf\": rpc error: code = NotFound desc = could not find container \"2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf\": container with ID starting with 2582b6fa001e7c7c184fde389404fd8909c2660772cd8b169f95d0fe67e0dbaf not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.081271 4688 scope.go:117] "RemoveContainer" containerID="04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.081720 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e\": container with ID starting with 04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e not found: ID does not exist" containerID="04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.081748 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e"} err="failed to get container status \"04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e\": rpc error: code = NotFound desc = could not find container \"04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e\": container with ID starting with 04be9f5cd53cb9015bcde530b6163199e3f582d8705b7675ce39c8c8f12fe00e not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.081762 4688 scope.go:117] "RemoveContainer" containerID="92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.082511 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8\": container with ID starting with 92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8 not found: ID does not exist" containerID="92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.082581 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8"} err="failed to get container status \"92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8\": rpc error: code = NotFound desc = could not find container \"92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8\": container with ID starting with 92de938d657b8eb6f6dd29f04333185a6e8800e9f2263e1941d20fa874a47cc8 not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.082628 4688 scope.go:117] "RemoveContainer" containerID="67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4" Nov 25 12:18:44 crc 
kubenswrapper[4688]: I1125 12:18:44.118264 4688 scope.go:117] "RemoveContainer" containerID="6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.145779 4688 scope.go:117] "RemoveContainer" containerID="fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.165474 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6g9dv"] Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.172316 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6g9dv"] Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.177789 4688 scope.go:117] "RemoveContainer" containerID="67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.179750 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4\": container with ID starting with 67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4 not found: ID does not exist" containerID="67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.179809 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4"} err="failed to get container status \"67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4\": rpc error: code = NotFound desc = could not find container \"67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4\": container with ID starting with 67bef2296ed74443b4f6e24bc9392b43fa50e1bf0be046188e063fcdb56e7dd4 not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.179845 4688 scope.go:117] "RemoveContainer" containerID="6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.180770 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5\": container with ID starting with 6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5 not found: ID does not exist" containerID="6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.180796 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5"} err="failed to get container status \"6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5\": rpc error: code = NotFound desc = could not find container \"6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5\": container with ID starting with 6c771c31abaa6344bb65dfa7f9c328bb93e62825cfddfafddac3fcaa082173c5 not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.180814 4688 scope.go:117] "RemoveContainer" containerID="fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad" Nov 25 12:18:44 crc kubenswrapper[4688]: E1125 12:18:44.182066 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad\": container with ID starting with fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad not found: ID does not exist" containerID="fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.182088 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad"} err="failed to get container status \"fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad\": rpc error: code = NotFound desc = could not find container \"fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad\": container with ID starting with fbab8769d98cca4a457302913e40831693e6caa26c4a139a6de5ad704324c0ad not found: ID does not exist" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.188276 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ps6n8"] Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.191419 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ps6n8"] Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.746591 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" path="/var/lib/kubelet/pods/52724116-f0b6-48c0-9de2-a6dc6ba73524/volumes" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.748432 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16521c9-4940-4ab4-acda-cec2b56f285e" path="/var/lib/kubelet/pods/b16521c9-4940-4ab4-acda-cec2b56f285e/volumes" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.749121 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" path="/var/lib/kubelet/pods/c57b3e0f-17f5-42a4-bc38-40f6d101aecf/volumes" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.750508 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" path="/var/lib/kubelet/pods/d17b935e-550e-4a26-8974-0d8c70f0657f/volumes" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.751332 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" path="/var/lib/kubelet/pods/ed58cdcf-2778-4c9f-8ef5-915035ad0800/volumes" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.853819 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" event={"ID":"499bbc68-a6dd-4670-acef-2dfcce904fc3","Type":"ContainerStarted","Data":"5b8fcde9ee7b60d6b88a3e046ab4e80cc524b0d1a3cb9469b8da770b94d64703"} Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.853858 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" event={"ID":"499bbc68-a6dd-4670-acef-2dfcce904fc3","Type":"ContainerStarted","Data":"93caa9e3ec48d184ae3d9dd6dd0a839a5c6b1ba96e59c4c24f2e3a7698bce4c9"} Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.854109 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:44 crc kubenswrapper[4688]: I1125 12:18:44.858622 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" Nov 25 12:18:44 crc 
kubenswrapper[4688]: I1125 12:18:44.871426 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ps6lt" podStartSLOduration=1.8714092249999998 podStartE2EDuration="1.871409225s" podCreationTimestamp="2025-11-25 12:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:18:44.870926032 +0000 UTC m=+274.980554900" watchObservedRunningTime="2025-11-25 12:18:44.871409225 +0000 UTC m=+274.981038093" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.312771 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mfmlx"] Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.312997 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313012 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313027 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerName="extract-content" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313035 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerName="extract-content" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313049 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16521c9-4940-4ab4-acda-cec2b56f285e" containerName="marketplace-operator" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313058 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16521c9-4940-4ab4-acda-cec2b56f285e" containerName="marketplace-operator" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313067 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313075 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313092 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313100 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313111 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313119 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313130 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerName="extract-utilities" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313138 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerName="extract-utilities" Nov 25 12:18:45 
crc kubenswrapper[4688]: E1125 12:18:45.313152 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerName="extract-utilities" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313161 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerName="extract-utilities" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313172 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="extract-content" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313179 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="extract-content" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313190 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerName="extract-utilities" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313199 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerName="extract-utilities" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313211 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerName="extract-content" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313219 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerName="extract-content" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313232 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerName="extract-content" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313240 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerName="extract-content" Nov 25 12:18:45 crc kubenswrapper[4688]: E1125 12:18:45.313251 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="extract-utilities" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313259 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="extract-utilities" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313361 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed58cdcf-2778-4c9f-8ef5-915035ad0800" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313374 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="52724116-f0b6-48c0-9de2-a6dc6ba73524" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313389 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17b935e-550e-4a26-8974-0d8c70f0657f" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313399 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16521c9-4940-4ab4-acda-cec2b56f285e" containerName="marketplace-operator" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.313410 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57b3e0f-17f5-42a4-bc38-40f6d101aecf" containerName="registry-server" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.314419 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.316747 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.322922 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfmlx"] Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.426205 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-catalog-content\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.426289 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-utilities\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.426329 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7m4\" (UniqueName: \"kubernetes.io/projected/c2722d9c-d618-4c2f-a44e-25bde80431b9-kube-api-access-td7m4\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.509015 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xfkp8"] Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.513715 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.515983 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.517971 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xfkp8"] Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.528809 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td7m4\" (UniqueName: \"kubernetes.io/projected/c2722d9c-d618-4c2f-a44e-25bde80431b9-kube-api-access-td7m4\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.528870 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-catalog-content\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.528911 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-utilities\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.529314 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-utilities\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.529880 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-catalog-content\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.548652 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td7m4\" (UniqueName: \"kubernetes.io/projected/c2722d9c-d618-4c2f-a44e-25bde80431b9-kube-api-access-td7m4\") pod \"redhat-marketplace-mfmlx\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.630507 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-catalog-content\") pod \"certified-operators-xfkp8\" (UID: \"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.630594 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfvtc\" (UniqueName: \"kubernetes.io/projected/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-kube-api-access-pfvtc\") pod \"certified-operators-xfkp8\" (UID: 
\"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.630671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-utilities\") pod \"certified-operators-xfkp8\" (UID: \"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.632410 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.732322 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfvtc\" (UniqueName: \"kubernetes.io/projected/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-kube-api-access-pfvtc\") pod \"certified-operators-xfkp8\" (UID: \"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.732633 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-utilities\") pod \"certified-operators-xfkp8\" (UID: \"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.732686 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-catalog-content\") pod \"certified-operators-xfkp8\" (UID: \"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.733108 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-utilities\") pod \"certified-operators-xfkp8\" (UID: \"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.733117 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-catalog-content\") pod \"certified-operators-xfkp8\" (UID: \"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.749072 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfvtc\" (UniqueName: \"kubernetes.io/projected/ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f-kube-api-access-pfvtc\") pod \"certified-operators-xfkp8\" (UID: \"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f\") " pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:45 crc kubenswrapper[4688]: I1125 12:18:45.828997 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xfkp8" Nov 25 12:18:46 crc kubenswrapper[4688]: I1125 12:18:46.032266 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfmlx"] Nov 25 12:18:46 crc kubenswrapper[4688]: W1125 12:18:46.042035 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2722d9c_d618_4c2f_a44e_25bde80431b9.slice/crio-2faccb33e131458b0beac0917b541de242b3084f9d7a4618a159a7cb85d91d0d WatchSource:0}: Error finding container 2faccb33e131458b0beac0917b541de242b3084f9d7a4618a159a7cb85d91d0d: Status 404 returned error can't find the container with id 2faccb33e131458b0beac0917b541de242b3084f9d7a4618a159a7cb85d91d0d Nov 25 12:18:46 crc kubenswrapper[4688]: I1125 12:18:46.103964 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xfkp8"] Nov 25 12:18:46 crc kubenswrapper[4688]: W1125 12:18:46.109085 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca8b02cb_547f_4625_ada1_6ce7d2cb8d7f.slice/crio-480b6c6f294ac1fcca83c460d97adba330db5be7392bddeed02771cf103e816e WatchSource:0}: Error finding container 480b6c6f294ac1fcca83c460d97adba330db5be7392bddeed02771cf103e816e: Status 404 returned error can't find the container with id 480b6c6f294ac1fcca83c460d97adba330db5be7392bddeed02771cf103e816e Nov 25 12:18:46 crc kubenswrapper[4688]: I1125 12:18:46.869765 4688 generic.go:334] "Generic (PLEG): container finished" podID="ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f" containerID="c8262becdee6130a21106884da9be448bdc5e406dbf791378bf24d68e304dd83" exitCode=0 Nov 25 12:18:46 crc kubenswrapper[4688]: I1125 12:18:46.869841 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xfkp8" event={"ID":"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f","Type":"ContainerDied","Data":"c8262becdee6130a21106884da9be448bdc5e406dbf791378bf24d68e304dd83"} Nov 25 12:18:46 crc kubenswrapper[4688]: I1125 12:18:46.869868 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xfkp8" event={"ID":"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f","Type":"ContainerStarted","Data":"480b6c6f294ac1fcca83c460d97adba330db5be7392bddeed02771cf103e816e"} Nov 25 12:18:46 crc kubenswrapper[4688]: I1125 12:18:46.872510 4688 generic.go:334] "Generic (PLEG): container finished" podID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerID="04c5c357a3a28ee79be4a6968eddecf0fd4f01f190c2185d3952a93f97cae1e6" exitCode=0 Nov 25 12:18:46 crc kubenswrapper[4688]: I1125 12:18:46.872555 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfmlx" event={"ID":"c2722d9c-d618-4c2f-a44e-25bde80431b9","Type":"ContainerDied","Data":"04c5c357a3a28ee79be4a6968eddecf0fd4f01f190c2185d3952a93f97cae1e6"} Nov 25 12:18:46 crc kubenswrapper[4688]: I1125 12:18:46.872590 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfmlx" event={"ID":"c2722d9c-d618-4c2f-a44e-25bde80431b9","Type":"ContainerStarted","Data":"2faccb33e131458b0beac0917b541de242b3084f9d7a4618a159a7cb85d91d0d"} Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.715939 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vxpfw"] Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.717937 4688 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.721001 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.726292 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxpfw"] Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.858619 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2dd753-bae0-4992-ad54-4fd56d590f82-utilities\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.858967 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmbc\" (UniqueName: \"kubernetes.io/projected/ef2dd753-bae0-4992-ad54-4fd56d590f82-kube-api-access-cxmbc\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.859047 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2dd753-bae0-4992-ad54-4fd56d590f82-catalog-content\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.878389 4688 generic.go:334] "Generic (PLEG): container finished" podID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerID="5de1503317b4f2c9b8cf2ec01b6f08b2c4f1bdf48328aa882520fd9cdf49f5c2" exitCode=0 Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.878446 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfmlx" event={"ID":"c2722d9c-d618-4c2f-a44e-25bde80431b9","Type":"ContainerDied","Data":"5de1503317b4f2c9b8cf2ec01b6f08b2c4f1bdf48328aa882520fd9cdf49f5c2"} Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.880558 4688 generic.go:334] "Generic (PLEG): container finished" podID="ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f" containerID="cbfce606b1ffcb60c3d6031929cdf4253d9d768cb428150aa10cea87878e1fdf" exitCode=0 Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.880582 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xfkp8" event={"ID":"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f","Type":"ContainerDied","Data":"cbfce606b1ffcb60c3d6031929cdf4253d9d768cb428150aa10cea87878e1fdf"} Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.921531 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fjm2h"] Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.922643 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.925070 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fjm2h"] Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.925711 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.960283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2dd753-bae0-4992-ad54-4fd56d590f82-catalog-content\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.960336 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2dd753-bae0-4992-ad54-4fd56d590f82-utilities\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.960427 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxmbc\" (UniqueName: \"kubernetes.io/projected/ef2dd753-bae0-4992-ad54-4fd56d590f82-kube-api-access-cxmbc\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.961154 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef2dd753-bae0-4992-ad54-4fd56d590f82-catalog-content\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.961312 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef2dd753-bae0-4992-ad54-4fd56d590f82-utilities\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:47 crc kubenswrapper[4688]: I1125 12:18:47.980356 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxmbc\" (UniqueName: \"kubernetes.io/projected/ef2dd753-bae0-4992-ad54-4fd56d590f82-kube-api-access-cxmbc\") pod \"community-operators-vxpfw\" (UID: \"ef2dd753-bae0-4992-ad54-4fd56d590f82\") " pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.061896 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-utilities\") pod \"redhat-operators-fjm2h\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.061951 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxxzf\" (UniqueName: \"kubernetes.io/projected/02febfd8-ed0e-431b-aef6-1ae612335540-kube-api-access-nxxzf\") pod \"redhat-operators-fjm2h\" (UID: 
\"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.062049 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-catalog-content\") pod \"redhat-operators-fjm2h\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.089755 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.179791 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-catalog-content\") pod \"redhat-operators-fjm2h\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.180288 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-utilities\") pod \"redhat-operators-fjm2h\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.180325 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxxzf\" (UniqueName: \"kubernetes.io/projected/02febfd8-ed0e-431b-aef6-1ae612335540-kube-api-access-nxxzf\") pod \"redhat-operators-fjm2h\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.180853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-utilities\") pod \"redhat-operators-fjm2h\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.180853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-catalog-content\") pod \"redhat-operators-fjm2h\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.210295 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxxzf\" (UniqueName: \"kubernetes.io/projected/02febfd8-ed0e-431b-aef6-1ae612335540-kube-api-access-nxxzf\") pod \"redhat-operators-fjm2h\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.237199 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fjm2h"
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.295461 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxpfw"]
Nov 25 12:18:48 crc kubenswrapper[4688]: W1125 12:18:48.302591 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef2dd753_bae0_4992_ad54_4fd56d590f82.slice/crio-8efb3520b8996222f409a3ccbcfb2998f3742c8c5def8168aa14648364ec718b WatchSource:0}: Error finding container 8efb3520b8996222f409a3ccbcfb2998f3742c8c5def8168aa14648364ec718b: Status 404 returned error can't find the container with id 8efb3520b8996222f409a3ccbcfb2998f3742c8c5def8168aa14648364ec718b
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.645811 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fjm2h"]
Nov 25 12:18:48 crc kubenswrapper[4688]: W1125 12:18:48.654568 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02febfd8_ed0e_431b_aef6_1ae612335540.slice/crio-8995f5ffafa381dbe5a9c3940ceb763e5db81fdb269d696666a8aea115b41a6d WatchSource:0}: Error finding container 8995f5ffafa381dbe5a9c3940ceb763e5db81fdb269d696666a8aea115b41a6d: Status 404 returned error can't find the container with id 8995f5ffafa381dbe5a9c3940ceb763e5db81fdb269d696666a8aea115b41a6d
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.886604 4688 generic.go:334] "Generic (PLEG): container finished" podID="ef2dd753-bae0-4992-ad54-4fd56d590f82" containerID="b3f7735bcb50afb498315a73c5bd5051a90a7f248125d9fca9df4c0ec27b51cc" exitCode=0
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.886659 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxpfw" event={"ID":"ef2dd753-bae0-4992-ad54-4fd56d590f82","Type":"ContainerDied","Data":"b3f7735bcb50afb498315a73c5bd5051a90a7f248125d9fca9df4c0ec27b51cc"}
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.888391 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxpfw" event={"ID":"ef2dd753-bae0-4992-ad54-4fd56d590f82","Type":"ContainerStarted","Data":"8efb3520b8996222f409a3ccbcfb2998f3742c8c5def8168aa14648364ec718b"}
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.892601 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfmlx" event={"ID":"c2722d9c-d618-4c2f-a44e-25bde80431b9","Type":"ContainerStarted","Data":"51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18"}
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.895057 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xfkp8" event={"ID":"ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f","Type":"ContainerStarted","Data":"3cca1c854ce21f7987b4597450d9a447cc5496e7107c728c6926e4530bada468"}
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.896553 4688 generic.go:334] "Generic (PLEG): container finished" podID="02febfd8-ed0e-431b-aef6-1ae612335540" containerID="a4d30163b3ecd72cac6f2ffb299a358743adc8a944c945f6caf164c4dc611d67" exitCode=0
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.896586 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjm2h" event={"ID":"02febfd8-ed0e-431b-aef6-1ae612335540","Type":"ContainerDied","Data":"a4d30163b3ecd72cac6f2ffb299a358743adc8a944c945f6caf164c4dc611d67"}
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.896605 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjm2h" event={"ID":"02febfd8-ed0e-431b-aef6-1ae612335540","Type":"ContainerStarted","Data":"8995f5ffafa381dbe5a9c3940ceb763e5db81fdb269d696666a8aea115b41a6d"}
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.953271 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xfkp8" podStartSLOduration=2.457911509 podStartE2EDuration="3.95325074s" podCreationTimestamp="2025-11-25 12:18:45 +0000 UTC" firstStartedPulling="2025-11-25 12:18:46.871843078 +0000 UTC m=+276.981471966" lastFinishedPulling="2025-11-25 12:18:48.367182339 +0000 UTC m=+278.476811197" observedRunningTime="2025-11-25 12:18:48.950009833 +0000 UTC m=+279.059638701" watchObservedRunningTime="2025-11-25 12:18:48.95325074 +0000 UTC m=+279.062879608"
Nov 25 12:18:48 crc kubenswrapper[4688]: I1125 12:18:48.966223 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mfmlx" podStartSLOduration=2.479220933 podStartE2EDuration="3.966202238s" podCreationTimestamp="2025-11-25 12:18:45 +0000 UTC" firstStartedPulling="2025-11-25 12:18:46.873891443 +0000 UTC m=+276.983520311" lastFinishedPulling="2025-11-25 12:18:48.360872748 +0000 UTC m=+278.470501616" observedRunningTime="2025-11-25 12:18:48.965090269 +0000 UTC m=+279.074719137" watchObservedRunningTime="2025-11-25 12:18:48.966202238 +0000 UTC m=+279.075831106"
Nov 25 12:18:49 crc kubenswrapper[4688]: I1125 12:18:49.904931 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxpfw" event={"ID":"ef2dd753-bae0-4992-ad54-4fd56d590f82","Type":"ContainerStarted","Data":"7271b529c39c0305125412cbcd88ce176edd2b38988046b80d2f20b0c28c6568"}
Nov 25 12:18:49 crc kubenswrapper[4688]: I1125 12:18:49.906944 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjm2h" event={"ID":"02febfd8-ed0e-431b-aef6-1ae612335540","Type":"ContainerStarted","Data":"bc04c1cab6758635ea4c54d1b2ad74c2a8de68e70d45d6b76745bed88d448e8a"}
Nov 25 12:18:50 crc kubenswrapper[4688]: I1125 12:18:50.913956 4688 generic.go:334] "Generic (PLEG): container finished" podID="02febfd8-ed0e-431b-aef6-1ae612335540" containerID="bc04c1cab6758635ea4c54d1b2ad74c2a8de68e70d45d6b76745bed88d448e8a" exitCode=0
Nov 25 12:18:50 crc kubenswrapper[4688]: I1125 12:18:50.914010 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjm2h" event={"ID":"02febfd8-ed0e-431b-aef6-1ae612335540","Type":"ContainerDied","Data":"bc04c1cab6758635ea4c54d1b2ad74c2a8de68e70d45d6b76745bed88d448e8a"}
Nov 25 12:18:50 crc kubenswrapper[4688]: I1125 12:18:50.918053 4688 generic.go:334] "Generic (PLEG): container finished" podID="ef2dd753-bae0-4992-ad54-4fd56d590f82" containerID="7271b529c39c0305125412cbcd88ce176edd2b38988046b80d2f20b0c28c6568" exitCode=0
Nov 25 12:18:50 crc kubenswrapper[4688]: I1125 12:18:50.918092 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxpfw" event={"ID":"ef2dd753-bae0-4992-ad54-4fd56d590f82","Type":"ContainerDied","Data":"7271b529c39c0305125412cbcd88ce176edd2b38988046b80d2f20b0c28c6568"}
Nov 25 12:18:51 crc kubenswrapper[4688]: I1125 12:18:51.952936 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjm2h" event={"ID":"02febfd8-ed0e-431b-aef6-1ae612335540","Type":"ContainerStarted","Data":"66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd"}
Nov 25 12:18:51 crc kubenswrapper[4688]: I1125 12:18:51.960554 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxpfw" event={"ID":"ef2dd753-bae0-4992-ad54-4fd56d590f82","Type":"ContainerStarted","Data":"150f27fc285166c4fa9808059458c80612caa38b4f8f8dba951004b38d8928a4"}
Nov 25 12:18:51 crc kubenswrapper[4688]: I1125 12:18:51.995805 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fjm2h" podStartSLOduration=2.577544929 podStartE2EDuration="4.995783016s" podCreationTimestamp="2025-11-25 12:18:47 +0000 UTC" firstStartedPulling="2025-11-25 12:18:48.909266646 +0000 UTC m=+279.018895514" lastFinishedPulling="2025-11-25 12:18:51.327504733 +0000 UTC m=+281.437133601" observedRunningTime="2025-11-25 12:18:51.977992328 +0000 UTC m=+282.087621216" watchObservedRunningTime="2025-11-25 12:18:51.995783016 +0000 UTC m=+282.105411884"
Nov 25 12:18:51 crc kubenswrapper[4688]: I1125 12:18:51.996040 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vxpfw" podStartSLOduration=2.50027913 podStartE2EDuration="4.996035793s" podCreationTimestamp="2025-11-25 12:18:47 +0000 UTC" firstStartedPulling="2025-11-25 12:18:48.887837589 +0000 UTC m=+278.997466457" lastFinishedPulling="2025-11-25 12:18:51.383594262 +0000 UTC m=+281.493223120" observedRunningTime="2025-11-25 12:18:51.99332563 +0000 UTC m=+282.102954498" watchObservedRunningTime="2025-11-25 12:18:51.996035793 +0000 UTC m=+282.105664661"
Nov 25 12:18:55 crc kubenswrapper[4688]: I1125 12:18:55.633472 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mfmlx"
Nov 25 12:18:55 crc kubenswrapper[4688]: I1125 12:18:55.634076 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mfmlx"
Nov 25 12:18:55 crc kubenswrapper[4688]: I1125 12:18:55.689576 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mfmlx"
Nov 25 12:18:55 crc kubenswrapper[4688]: I1125 12:18:55.829457 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xfkp8"
Nov 25 12:18:55 crc kubenswrapper[4688]: I1125 12:18:55.830699 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xfkp8"
Nov 25 12:18:55 crc kubenswrapper[4688]: I1125 12:18:55.870234 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xfkp8"
Nov 25 12:18:56 crc kubenswrapper[4688]: I1125 12:18:56.014081 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mfmlx"
Nov 25 12:18:56 crc kubenswrapper[4688]: I1125 12:18:56.026256 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xfkp8"
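
The pod_startup_latency_tracker entries above are internally consistent: for certified-operators-xfkp8, podStartSLOduration equals podStartE2EDuration minus the image-pull window when computed from the monotonic m=+ readings, i.e. the SLO figure is startup latency with image pulling excluded. A quick Go check of that arithmetic, with the numbers copied from the entry (illustrative only, not kubelet code):

    package main

    import "fmt"

    // Consistency check of the "Observed pod startup duration" entry for
    // certified-operators-xfkp8, using the monotonic m=+ offsets from the log.
    func main() {
        const (
            e2e                 = 3.95325074    // podStartE2EDuration, seconds
            firstStartedPulling = 276.981471966 // m=+ offset, seconds
            lastFinishedPulling = 278.476811197 // m=+ offset, seconds
            reportedSLO         = 2.457911509   // podStartSLOduration, seconds
        )
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("pull window: %.9fs\n", pull)
        fmt.Printf("e2e - pull:  %.9fs (reported SLO: %.9fs)\n", e2e-pull, reportedSLO)
        // e2e - pull matches the reported podStartSLOduration exactly.
    }
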
pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:58 crc kubenswrapper[4688]: I1125 12:18:58.091644 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:58 crc kubenswrapper[4688]: I1125 12:18:58.131259 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:58 crc kubenswrapper[4688]: I1125 12:18:58.238280 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:58 crc kubenswrapper[4688]: I1125 12:18:58.238329 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:58 crc kubenswrapper[4688]: I1125 12:18:58.278467 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:18:59 crc kubenswrapper[4688]: I1125 12:18:59.037838 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vxpfw" Nov 25 12:18:59 crc kubenswrapper[4688]: I1125 12:18:59.126830 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 12:20:17 crc kubenswrapper[4688]: I1125 12:20:17.854134 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:20:17 crc kubenswrapper[4688]: I1125 12:20:17.854701 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:20:47 crc kubenswrapper[4688]: I1125 12:20:47.854062 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:20:47 crc kubenswrapper[4688]: I1125 12:20:47.854508 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:21:17 crc kubenswrapper[4688]: I1125 12:21:17.854074 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:21:17 crc kubenswrapper[4688]: I1125 12:21:17.854685 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:21:17 crc kubenswrapper[4688]: I1125 12:21:17.854733 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:21:17 crc kubenswrapper[4688]: I1125 12:21:17.855386 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"819b0f817aeb0d3804bd7ffe9f7fb12d89359e0a946dcb6d57d85ef5b466a5d9"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:21:17 crc kubenswrapper[4688]: I1125 12:21:17.855445 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://819b0f817aeb0d3804bd7ffe9f7fb12d89359e0a946dcb6d57d85ef5b466a5d9" gracePeriod=600 Nov 25 12:21:18 crc kubenswrapper[4688]: I1125 12:21:18.859194 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="819b0f817aeb0d3804bd7ffe9f7fb12d89359e0a946dcb6d57d85ef5b466a5d9" exitCode=0 Nov 25 12:21:18 crc kubenswrapper[4688]: I1125 12:21:18.859294 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"819b0f817aeb0d3804bd7ffe9f7fb12d89359e0a946dcb6d57d85ef5b466a5d9"} Nov 25 12:21:18 crc kubenswrapper[4688]: I1125 12:21:18.860246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"fe2ba0924be4215985e9fa9117124142232ec7fc1bf1aff1c7218dd864800a1d"} Nov 25 12:21:18 crc kubenswrapper[4688]: I1125 12:21:18.860283 4688 scope.go:117] "RemoveContainer" containerID="7b02a702b3468e623a1e653ee6f5ea1595546dfe109457e24fb9d0aed2210e79" Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.428505 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-knl2c"] Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.430019 4688 util.go:30] "No sandbox for pod can be found. 
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.428505 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-knl2c"]
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.430019 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.448096 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-knl2c"]
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.536343 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-registry-tls\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.536667 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c2458095-4d51-45e1-98dd-60827d733c3f-registry-certificates\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.536806 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnmm2\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-kube-api-access-rnmm2\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.536948 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c2458095-4d51-45e1-98dd-60827d733c3f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.537060 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.537409 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c2458095-4d51-45e1-98dd-60827d733c3f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.537568 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2458095-4d51-45e1-98dd-60827d733c3f-trusted-ca\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.537677 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-bound-sa-token\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.563728 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.638782 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c2458095-4d51-45e1-98dd-60827d733c3f-registry-certificates\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.638871 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnmm2\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-kube-api-access-rnmm2\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.638900 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c2458095-4d51-45e1-98dd-60827d733c3f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.638936 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c2458095-4d51-45e1-98dd-60827d733c3f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.638976 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2458095-4d51-45e1-98dd-60827d733c3f-trusted-ca\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.638993 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-bound-sa-token\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.639025 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-registry-tls\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.640268 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2458095-4d51-45e1-98dd-60827d733c3f-trusted-ca\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.640460 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c2458095-4d51-45e1-98dd-60827d733c3f-registry-certificates\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.640903 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c2458095-4d51-45e1-98dd-60827d733c3f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.646631 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-registry-tls\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.646780 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c2458095-4d51-45e1-98dd-60827d733c3f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.655110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnmm2\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-kube-api-access-rnmm2\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.655862 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2458095-4d51-45e1-98dd-60827d733c3f-bound-sa-token\") pod \"image-registry-66df7c8f76-knl2c\" (UID: \"c2458095-4d51-45e1-98dd-60827d733c3f\") " pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.748919 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:34 crc kubenswrapper[4688]: I1125 12:22:34.928472 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-knl2c"]
Nov 25 12:22:35 crc kubenswrapper[4688]: I1125 12:22:35.295907 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-knl2c" event={"ID":"c2458095-4d51-45e1-98dd-60827d733c3f","Type":"ContainerStarted","Data":"deb1a9fbd6ec5271d2d20f2384903bd57035305adfdd8316fb757b26bece61a3"}
Nov 25 12:22:36 crc kubenswrapper[4688]: I1125 12:22:36.302096 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-knl2c" event={"ID":"c2458095-4d51-45e1-98dd-60827d733c3f","Type":"ContainerStarted","Data":"f84fe7197f4438ecdc7e247098af3bd287625ac472309916560bf0b42bbdc197"}
Nov 25 12:22:36 crc kubenswrapper[4688]: I1125 12:22:36.302471 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:36 crc kubenswrapper[4688]: I1125 12:22:36.333902 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-knl2c" podStartSLOduration=2.333881859 podStartE2EDuration="2.333881859s" podCreationTimestamp="2025-11-25 12:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:22:36.329704707 +0000 UTC m=+506.439333595" watchObservedRunningTime="2025-11-25 12:22:36.333881859 +0000 UTC m=+506.443510727"
Nov 25 12:22:54 crc kubenswrapper[4688]: I1125 12:22:54.755693 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-knl2c"
Nov 25 12:22:54 crc kubenswrapper[4688]: I1125 12:22:54.816493 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fnx92"]
Nov 25 12:23:18 crc kubenswrapper[4688]: I1125 12:23:18.702368 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:23:18 crc kubenswrapper[4688]: I1125 12:23:18.703714 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:23:19 crc kubenswrapper[4688]: I1125 12:23:19.873638 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" podUID="287e5654-ecac-4340-ad1f-9a307d57de32" containerName="registry" containerID="cri-o://7d03b2f5fb3b39d595b9f4ad97a41c57d2c4b5765d2765c31c9cc3bf7402414d" gracePeriod=30
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.742317 4688 generic.go:334] "Generic (PLEG): container finished" podID="287e5654-ecac-4340-ad1f-9a307d57de32" containerID="7d03b2f5fb3b39d595b9f4ad97a41c57d2c4b5765d2765c31c9cc3bf7402414d" exitCode=0
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.750947 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" event={"ID":"287e5654-ecac-4340-ad1f-9a307d57de32","Type":"ContainerDied","Data":"7d03b2f5fb3b39d595b9f4ad97a41c57d2c4b5765d2765c31c9cc3bf7402414d"}
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.799279 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.922151 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6snn\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-kube-api-access-m6snn\") pod \"287e5654-ecac-4340-ad1f-9a307d57de32\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") "
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.923690 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-registry-tls\") pod \"287e5654-ecac-4340-ad1f-9a307d57de32\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") "
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.923754 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/287e5654-ecac-4340-ad1f-9a307d57de32-ca-trust-extracted\") pod \"287e5654-ecac-4340-ad1f-9a307d57de32\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") "
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.923813 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-registry-certificates\") pod \"287e5654-ecac-4340-ad1f-9a307d57de32\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") "
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.923869 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-bound-sa-token\") pod \"287e5654-ecac-4340-ad1f-9a307d57de32\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") "
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.923916 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/287e5654-ecac-4340-ad1f-9a307d57de32-installation-pull-secrets\") pod \"287e5654-ecac-4340-ad1f-9a307d57de32\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") "
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.923947 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-trusted-ca\") pod \"287e5654-ecac-4340-ad1f-9a307d57de32\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") "
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.924108 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"287e5654-ecac-4340-ad1f-9a307d57de32\" (UID: \"287e5654-ecac-4340-ad1f-9a307d57de32\") "
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.925549 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "287e5654-ecac-4340-ad1f-9a307d57de32" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.927714 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "287e5654-ecac-4340-ad1f-9a307d57de32" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.932811 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-kube-api-access-m6snn" (OuterVolumeSpecName: "kube-api-access-m6snn") pod "287e5654-ecac-4340-ad1f-9a307d57de32" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32"). InnerVolumeSpecName "kube-api-access-m6snn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.933975 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287e5654-ecac-4340-ad1f-9a307d57de32-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "287e5654-ecac-4340-ad1f-9a307d57de32" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.935330 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "287e5654-ecac-4340-ad1f-9a307d57de32" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.937313 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "287e5654-ecac-4340-ad1f-9a307d57de32" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.952282 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "287e5654-ecac-4340-ad1f-9a307d57de32" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 25 12:23:20 crc kubenswrapper[4688]: I1125 12:23:20.955569 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/287e5654-ecac-4340-ad1f-9a307d57de32-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "287e5654-ecac-4340-ad1f-9a307d57de32" (UID: "287e5654-ecac-4340-ad1f-9a307d57de32"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.026817 4688 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/287e5654-ecac-4340-ad1f-9a307d57de32-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.026928 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.026950 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6snn\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-kube-api-access-m6snn\") on node \"crc\" DevicePath \"\""
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.026968 4688 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-registry-tls\") on node \"crc\" DevicePath \"\""
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.026988 4688 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/287e5654-ecac-4340-ad1f-9a307d57de32-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.027006 4688 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/287e5654-ecac-4340-ad1f-9a307d57de32-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.027021 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/287e5654-ecac-4340-ad1f-9a307d57de32-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.749400 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92" event={"ID":"287e5654-ecac-4340-ad1f-9a307d57de32","Type":"ContainerDied","Data":"c2803f1393d71d93857840ca8887db40d98d63d28f981c6d0599aaf724343729"}
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.749479 4688 scope.go:117] "RemoveContainer" containerID="7d03b2f5fb3b39d595b9f4ad97a41c57d2c4b5765d2765c31c9cc3bf7402414d"
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.749608 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fnx92"
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.782299 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fnx92"]
Nov 25 12:23:21 crc kubenswrapper[4688]: I1125 12:23:21.785472 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fnx92"]
Nov 25 12:23:22 crc kubenswrapper[4688]: I1125 12:23:22.747742 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="287e5654-ecac-4340-ad1f-9a307d57de32" path="/var/lib/kubelet/pods/287e5654-ecac-4340-ad1f-9a307d57de32/volumes"
Nov 25 12:23:47 crc kubenswrapper[4688]: I1125 12:23:47.854600 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:23:47 crc kubenswrapper[4688]: I1125 12:23:47.855139 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:24:17 crc kubenswrapper[4688]: I1125 12:24:17.854827 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:24:17 crc kubenswrapper[4688]: I1125 12:24:17.855580 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:24:17 crc kubenswrapper[4688]: I1125 12:24:17.855717 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6"
Nov 25 12:24:17 crc kubenswrapper[4688]: I1125 12:24:17.856439 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe2ba0924be4215985e9fa9117124142232ec7fc1bf1aff1c7218dd864800a1d"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 12:24:17 crc kubenswrapper[4688]: I1125 12:24:17.856502 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://fe2ba0924be4215985e9fa9117124142232ec7fc1bf1aff1c7218dd864800a1d" gracePeriod=600
Nov 25 12:24:19 crc kubenswrapper[4688]: I1125 12:24:19.102790 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="fe2ba0924be4215985e9fa9117124142232ec7fc1bf1aff1c7218dd864800a1d" exitCode=0
Nov 25 12:24:19 crc kubenswrapper[4688]: I1125 12:24:19.102876 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"fe2ba0924be4215985e9fa9117124142232ec7fc1bf1aff1c7218dd864800a1d"}
Nov 25 12:24:19 crc kubenswrapper[4688]: I1125 12:24:19.103140 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"f8fce2e1ba2b0b0a8ccd7a9e7c79c4f46ac3a4e41d62d29310173c9c94b065de"}
Nov 25 12:24:19 crc kubenswrapper[4688]: I1125 12:24:19.103162 4688 scope.go:117] "RemoveContainer" containerID="819b0f817aeb0d3804bd7ffe9f7fb12d89359e0a946dcb6d57d85ef5b466a5d9"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.549317 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-d2tx4"]
Nov 25 12:25:09 crc kubenswrapper[4688]: E1125 12:25:09.550106 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="287e5654-ecac-4340-ad1f-9a307d57de32" containerName="registry"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.550121 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="287e5654-ecac-4340-ad1f-9a307d57de32" containerName="registry"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.550245 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="287e5654-ecac-4340-ad1f-9a307d57de32" containerName="registry"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.550723 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.553791 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fq4xk"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.553941 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.561417 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"]
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.562931 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.563409 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.565781 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-d2tx4"]
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.568025 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-29gmw"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.589877 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"]
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.589927 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jhcbt"]
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.590509 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-jhcbt"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.594591 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qf9b4"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.602240 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxgv2\" (UniqueName: \"kubernetes.io/projected/3d15c9cb-bc3f-4042-a05e-1a6e66e4348c-kube-api-access-dxgv2\") pod \"cert-manager-cainjector-7f985d654d-d2tx4\" (UID: \"3d15c9cb-bc3f-4042-a05e-1a6e66e4348c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.602300 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqx4k\" (UniqueName: \"kubernetes.io/projected/e46b8030-eb14-4ce3-9519-fdaf23f4f7cb-kube-api-access-qqx4k\") pod \"cert-manager-webhook-5655c58dd6-f4bkk\" (UID: \"e46b8030-eb14-4ce3-9519-fdaf23f4f7cb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.610611 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jhcbt"]
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.703736 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqx4k\" (UniqueName: \"kubernetes.io/projected/e46b8030-eb14-4ce3-9519-fdaf23f4f7cb-kube-api-access-qqx4k\") pod \"cert-manager-webhook-5655c58dd6-f4bkk\" (UID: \"e46b8030-eb14-4ce3-9519-fdaf23f4f7cb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.703799 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj76c\" (UniqueName: \"kubernetes.io/projected/8f855a3c-ac32-447f-8fca-8228aa44f91a-kube-api-access-fj76c\") pod \"cert-manager-5b446d88c5-jhcbt\" (UID: \"8f855a3c-ac32-447f-8fca-8228aa44f91a\") " pod="cert-manager/cert-manager-5b446d88c5-jhcbt"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.703847 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxgv2\" (UniqueName: \"kubernetes.io/projected/3d15c9cb-bc3f-4042-a05e-1a6e66e4348c-kube-api-access-dxgv2\") pod \"cert-manager-cainjector-7f985d654d-d2tx4\" (UID: \"3d15c9cb-bc3f-4042-a05e-1a6e66e4348c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.722478 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqx4k\" (UniqueName: \"kubernetes.io/projected/e46b8030-eb14-4ce3-9519-fdaf23f4f7cb-kube-api-access-qqx4k\") pod \"cert-manager-webhook-5655c58dd6-f4bkk\" (UID: \"e46b8030-eb14-4ce3-9519-fdaf23f4f7cb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.725343 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxgv2\" (UniqueName: \"kubernetes.io/projected/3d15c9cb-bc3f-4042-a05e-1a6e66e4348c-kube-api-access-dxgv2\") pod \"cert-manager-cainjector-7f985d654d-d2tx4\" (UID: \"3d15c9cb-bc3f-4042-a05e-1a6e66e4348c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.804843 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj76c\" (UniqueName: \"kubernetes.io/projected/8f855a3c-ac32-447f-8fca-8228aa44f91a-kube-api-access-fj76c\") pod \"cert-manager-5b446d88c5-jhcbt\" (UID: \"8f855a3c-ac32-447f-8fca-8228aa44f91a\") " pod="cert-manager/cert-manager-5b446d88c5-jhcbt"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.826811 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj76c\" (UniqueName: \"kubernetes.io/projected/8f855a3c-ac32-447f-8fca-8228aa44f91a-kube-api-access-fj76c\") pod \"cert-manager-5b446d88c5-jhcbt\" (UID: \"8f855a3c-ac32-447f-8fca-8228aa44f91a\") " pod="cert-manager/cert-manager-5b446d88c5-jhcbt"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.872107 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.893711 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"
Nov 25 12:25:09 crc kubenswrapper[4688]: I1125 12:25:09.905671 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-jhcbt"
Nov 25 12:25:10 crc kubenswrapper[4688]: I1125 12:25:10.088472 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-d2tx4"]
Nov 25 12:25:10 crc kubenswrapper[4688]: I1125 12:25:10.098237 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 12:25:10 crc kubenswrapper[4688]: I1125 12:25:10.375835 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"]
Nov 25 12:25:10 crc kubenswrapper[4688]: I1125 12:25:10.385875 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jhcbt"]
Nov 25 12:25:10 crc kubenswrapper[4688]: I1125 12:25:10.386803 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4" event={"ID":"3d15c9cb-bc3f-4042-a05e-1a6e66e4348c","Type":"ContainerStarted","Data":"5adbae4430f3722a38289489c429405a22c1da2ba849f5a98b27e2cef184787f"}
Nov 25 12:25:10 crc kubenswrapper[4688]: W1125 12:25:10.389511 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f855a3c_ac32_447f_8fca_8228aa44f91a.slice/crio-69c660ea7d003ae9597d1e3c8774cd7369448e05262cc729812556e5edb75c1b WatchSource:0}: Error finding container 69c660ea7d003ae9597d1e3c8774cd7369448e05262cc729812556e5edb75c1b: Status 404 returned error can't find the container with id 69c660ea7d003ae9597d1e3c8774cd7369448e05262cc729812556e5edb75c1b
Nov 25 12:25:10 crc kubenswrapper[4688]: W1125 12:25:10.392536 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode46b8030_eb14_4ce3_9519_fdaf23f4f7cb.slice/crio-aaf94966965bd26b666297c2d4b32d4146327b71b316ea4ab0823a7b1cc7dd88 WatchSource:0}: Error finding container aaf94966965bd26b666297c2d4b32d4146327b71b316ea4ab0823a7b1cc7dd88: Status 404 returned error can't find the container with id aaf94966965bd26b666297c2d4b32d4146327b71b316ea4ab0823a7b1cc7dd88
Nov 25 12:25:11 crc kubenswrapper[4688]: I1125 12:25:11.392973 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk" event={"ID":"e46b8030-eb14-4ce3-9519-fdaf23f4f7cb","Type":"ContainerStarted","Data":"aaf94966965bd26b666297c2d4b32d4146327b71b316ea4ab0823a7b1cc7dd88"}
Nov 25 12:25:11 crc kubenswrapper[4688]: I1125 12:25:11.393738 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jhcbt" event={"ID":"8f855a3c-ac32-447f-8fca-8228aa44f91a","Type":"ContainerStarted","Data":"69c660ea7d003ae9597d1e3c8774cd7369448e05262cc729812556e5edb75c1b"}
Nov 25 12:25:14 crc kubenswrapper[4688]: I1125 12:25:14.408302 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4" event={"ID":"3d15c9cb-bc3f-4042-a05e-1a6e66e4348c","Type":"ContainerStarted","Data":"5001e9df24bae42e2e3036bb7ac2ed906c4ba6793e7df9fe830c2f537bee5332"}
Nov 25 12:25:14 crc kubenswrapper[4688]: I1125 12:25:14.423882 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4" podStartSLOduration=2.181761557 podStartE2EDuration="5.423862605s" podCreationTimestamp="2025-11-25 12:25:09 +0000 UTC" firstStartedPulling="2025-11-25 12:25:10.097992568 +0000 UTC m=+660.207621436" lastFinishedPulling="2025-11-25 12:25:13.340093616 +0000 UTC m=+663.449722484" observedRunningTime="2025-11-25 12:25:14.419838567 +0000 UTC m=+664.529467435" watchObservedRunningTime="2025-11-25 12:25:14.423862605 +0000 UTC m=+664.533491473"
Nov 25 12:25:16 crc kubenswrapper[4688]: I1125 12:25:16.419821 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jhcbt" event={"ID":"8f855a3c-ac32-447f-8fca-8228aa44f91a","Type":"ContainerStarted","Data":"c994bd560a79f4807189bcb1c30104528908497675c1619ef3af12dfddfca082"}
Nov 25 12:25:16 crc kubenswrapper[4688]: I1125 12:25:16.421499 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk" event={"ID":"e46b8030-eb14-4ce3-9519-fdaf23f4f7cb","Type":"ContainerStarted","Data":"7ecd7c4993c61378d8a9b0f83439e9f6e11ed7b8cea134350dced088bc82ec17"}
Nov 25 12:25:16 crc kubenswrapper[4688]: I1125 12:25:16.421572 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"
Nov 25 12:25:16 crc kubenswrapper[4688]: I1125 12:25:16.431277 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-jhcbt" podStartSLOduration=2.151990447 podStartE2EDuration="7.43126013s" podCreationTimestamp="2025-11-25 12:25:09 +0000 UTC" firstStartedPulling="2025-11-25 12:25:10.391409601 +0000 UTC m=+660.501038469" lastFinishedPulling="2025-11-25 12:25:15.670679284 +0000 UTC m=+665.780308152" observedRunningTime="2025-11-25 12:25:16.431038473 +0000 UTC m=+666.540667341" watchObservedRunningTime="2025-11-25 12:25:16.43126013 +0000 UTC m=+666.540888998"
Nov 25 12:25:16 crc kubenswrapper[4688]: I1125 12:25:16.444670 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk" podStartSLOduration=2.278501543 podStartE2EDuration="7.44465458s" podCreationTimestamp="2025-11-25 12:25:09 +0000 UTC" firstStartedPulling="2025-11-25 12:25:10.396761115 +0000 UTC m=+660.506390023" lastFinishedPulling="2025-11-25 12:25:15.562914192 +0000 UTC m=+665.672543060" observedRunningTime="2025-11-25 12:25:16.443792367 +0000 UTC m=+666.553421235" watchObservedRunningTime="2025-11-25 12:25:16.44465458 +0000 UTC m=+666.554283448"
Nov 25 12:25:24 crc kubenswrapper[4688]: I1125 12:25:24.897128 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-f4bkk"
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.651845 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-csgdv"]
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.652919 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovn-controller" containerID="cri-o://0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2" gracePeriod=30
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.653350 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="sbdb" containerID="cri-o://9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" gracePeriod=30
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.653402 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="nbdb" containerID="cri-o://8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" gracePeriod=30
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.653443 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="northd" containerID="cri-o://2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d" gracePeriod=30
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.653481 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b" gracePeriod=30
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.653545 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kube-rbac-proxy-node" containerID="cri-o://051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5" gracePeriod=30
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.653645 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovn-acl-logging" containerID="cri-o://d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def" gracePeriod=30
Nov 25 12:25:37 crc kubenswrapper[4688]: I1125 12:25:37.693609 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" containerID="cri-o://40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" gracePeriod=30
Nov 25 12:25:37 crc kubenswrapper[4688]: E1125 12:25:37.885893 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 is running failed: container process not found" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Nov 25 12:25:37 crc kubenswrapper[4688]: E1125 12:25:37.886139 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c is running failed: container process not found" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Nov 25 12:25:37 crc kubenswrapper[4688]: E1125 12:25:37.886507 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c is running failed: container process not found" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Nov 25 12:25:37 crc kubenswrapper[4688]: E1125 12:25:37.886660 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 is running failed: container process not found" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Nov 25 12:25:37 crc kubenswrapper[4688]: E1125 12:25:37.887031 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 is running failed: container process not found" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Nov 25 12:25:37 crc kubenswrapper[4688]: E1125 12:25:37.887066 4688 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="sbdb"
Nov 25 12:25:37 crc kubenswrapper[4688]: E1125 12:25:37.887109 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c is running failed: container process not found" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Nov 25 12:25:37 crc kubenswrapper[4688]: E1125 12:25:37.887144 4688 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="nbdb"
Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.015364 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/3.log"
Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.018093 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovn-acl-logging/0.log"
Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.018855 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovn-controller/0.log"
Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.019481 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv"
kubenswrapper[4688]: E1125 12:25:38.076690 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076701 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.076716 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kube-rbac-proxy-node" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076726 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kube-rbac-proxy-node" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.076740 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076748 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.076760 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kubecfg-setup" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076767 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kubecfg-setup" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.076775 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076783 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.076797 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="northd" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076806 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="northd" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076918 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076931 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="nbdb" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076939 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076948 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovn-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076960 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076971 4688 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076980 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076990 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="kube-rbac-proxy-node" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.076999 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="sbdb" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.077006 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="northd" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.077014 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovn-acl-logging" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.077110 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.077119 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.077131 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.077137 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.077253 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerName="ovnkube-controller" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.079023 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159253 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-openvswitch\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159315 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-log-socket\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159341 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-config\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159363 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-netns\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159369 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159384 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-systemd\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159409 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovn-node-metrics-cert\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159481 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-kubelet\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159568 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159722 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-log-socket" (OuterVolumeSpecName: "log-socket") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159771 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-ovn-kubernetes\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159781 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159801 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159828 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159840 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159850 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159907 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-node-log\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159933 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-systemd-units\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159961 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-var-lib-openvswitch\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159984 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-node-log" (OuterVolumeSpecName: "node-log") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159988 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-netd\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.159995 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160007 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160021 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-script-lib\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160028 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160044 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-env-overrides\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160070 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-etc-openvswitch\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160112 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-bin\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160134 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-ovn\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160153 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z95pk\" (UniqueName: \"kubernetes.io/projected/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-kube-api-access-z95pk\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160172 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-slash\") pod \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\" (UID: \"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62\") " Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160188 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160192 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160218 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160303 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-slash" (OuterVolumeSpecName: "host-slash") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160319 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-etc-openvswitch\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160348 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovnkube-script-lib\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160368 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160377 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160400 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-systemd-units\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160422 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-ovn\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160443 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovn-node-metrics-cert\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160462 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160507 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-systemd\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160555 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-openvswitch\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160574 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-kubelet\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160589 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-cni-netd\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160604 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovnkube-config\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160619 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-node-log\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160660 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160700 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-slash\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160718 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-env-overrides\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160738 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-run-netns\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160799 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-cni-bin\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160833 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-log-socket\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160859 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-var-lib-openvswitch\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160890 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5r4\" (UniqueName: \"kubernetes.io/projected/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-kube-api-access-ct5r4\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160948 4688 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160960 4688 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160972 4688 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.160987 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 
crc kubenswrapper[4688]: I1125 12:25:38.160997 4688 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161007 4688 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161017 4688 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161026 4688 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161034 4688 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161044 4688 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161052 4688 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161060 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161068 4688 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161075 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161082 4688 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161091 4688 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.161099 4688 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc 
kubenswrapper[4688]: I1125 12:25:38.164496 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-kube-api-access-z95pk" (OuterVolumeSpecName: "kube-api-access-z95pk") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "kube-api-access-z95pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.164705 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.172012 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" (UID: "c9bf79ce-8d9b-472b-93a8-8e4c779bfb62"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.261652 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-var-lib-openvswitch\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.261739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct5r4\" (UniqueName: \"kubernetes.io/projected/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-kube-api-access-ct5r4\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.261767 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovnkube-script-lib\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.261764 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-var-lib-openvswitch\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262175 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-etc-openvswitch\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262242 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-etc-openvswitch\") pod 
\"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262285 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262337 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-systemd-units\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262357 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-ovn\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262377 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovn-node-metrics-cert\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262449 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-systemd\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262472 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-openvswitch\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262489 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-kubelet\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262504 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-cni-netd\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262543 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovnkube-config\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262561 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-node-log\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262579 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-slash\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262611 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-env-overrides\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262632 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-run-netns\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262650 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-cni-bin\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262671 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-log-socket\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262706 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z95pk\" (UniqueName: \"kubernetes.io/projected/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-kube-api-access-z95pk\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262718 4688 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262727 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62-ovn-node-metrics-cert\") on node 
\"crc\" DevicePath \"\"" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262750 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-log-socket\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262387 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262785 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-systemd\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262418 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-ovn\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262817 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-run-openvswitch\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262906 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-node-log\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262414 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-systemd-units\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262906 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-run-netns\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262924 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262935 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-slash\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262944 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-cni-bin\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262954 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-cni-netd\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.262936 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-host-kubelet\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.263416 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-env-overrides\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.263617 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovnkube-config\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.263958 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovnkube-script-lib\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.267889 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-ovn-node-metrics-cert\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.278513 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct5r4\" (UniqueName: \"kubernetes.io/projected/9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2-kube-api-access-ct5r4\") pod \"ovnkube-node-tkzmk\" (UID: \"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.395990 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.544810 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovnkube-controller/3.log" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547166 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovn-acl-logging/0.log" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547579 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-csgdv_c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/ovn-controller/0.log" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547877 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" exitCode=0 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547901 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" exitCode=0 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547912 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" exitCode=0 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547922 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d" exitCode=0 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547933 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b" exitCode=0 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547941 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5" exitCode=0 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547947 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547976 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.547949 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def" exitCode=143 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548027 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" containerID="0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2" exitCode=143 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548031 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548047 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548069 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548078 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548087 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548085 4688 scope.go:117] "RemoveContainer" containerID="40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548100 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548200 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548216 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548223 4688 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548230 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548236 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548242 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548248 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548274 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548287 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548303 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548312 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548319 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548326 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548333 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548340 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548346 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548353 4688 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548361 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548368 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548404 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548419 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548428 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548434 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548484 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548494 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548500 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548508 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548515 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548547 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548554 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548565 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-csgdv" event={"ID":"c9bf79ce-8d9b-472b-93a8-8e4c779bfb62","Type":"ContainerDied","Data":"5ae30d5c11ebc64bae4a82a00c3572e3611fb12e00dfc381aac881784f70ec52"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548578 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548586 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548594 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548600 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548607 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548613 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548622 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548628 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548635 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.548643 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.555914 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/2.log" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.560110 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/1.log" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.560176 4688 generic.go:334] "Generic (PLEG): container finished" podID="6c3971fa-9838-436e-97b1-be050abea83a" containerID="a3e9c6a69286c30e5e1065345a9b07bc7c55dbdb934f75898c22e5a18d024119" exitCode=2 Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.560268 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlfw5" 
event={"ID":"6c3971fa-9838-436e-97b1-be050abea83a","Type":"ContainerDied","Data":"a3e9c6a69286c30e5e1065345a9b07bc7c55dbdb934f75898c22e5a18d024119"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.560317 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.560989 4688 scope.go:117] "RemoveContainer" containerID="a3e9c6a69286c30e5e1065345a9b07bc7c55dbdb934f75898c22e5a18d024119" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.561472 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xlfw5_openshift-multus(6c3971fa-9838-436e-97b1-be050abea83a)\"" pod="openshift-multus/multus-xlfw5" podUID="6c3971fa-9838-436e-97b1-be050abea83a" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.562329 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"26b753debca424fa8b8bdd21254030ad5afefddabfd5c00579f0dbec30395c9c"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.562387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"4c6e4c2fd84f0acf4bb51f6e1df3aba68b12cd61af01ae7d453ec9248334a415"} Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.609949 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.632753 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-csgdv"] Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.644206 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-csgdv"] Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.655965 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a7bf57c_5584_4aad_a6c8_d3a641e9fdf2.slice/crio-conmon-26b753debca424fa8b8bdd21254030ad5afefddabfd5c00579f0dbec30395c9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a7bf57c_5584_4aad_a6c8_d3a641e9fdf2.slice/crio-26b753debca424fa8b8bdd21254030ad5afefddabfd5c00579f0dbec30395c9c.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.666900 4688 scope.go:117] "RemoveContainer" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.689636 4688 scope.go:117] "RemoveContainer" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.703687 4688 scope.go:117] "RemoveContainer" containerID="2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.719581 4688 scope.go:117] "RemoveContainer" containerID="50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b" Nov 25 12:25:38 crc 
kubenswrapper[4688]: I1125 12:25:38.732448 4688 scope.go:117] "RemoveContainer" containerID="051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.748244 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9bf79ce-8d9b-472b-93a8-8e4c779bfb62" path="/var/lib/kubelet/pods/c9bf79ce-8d9b-472b-93a8-8e4c779bfb62/volumes" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.749435 4688 scope.go:117] "RemoveContainer" containerID="d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.765888 4688 scope.go:117] "RemoveContainer" containerID="0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.785675 4688 scope.go:117] "RemoveContainer" containerID="bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.828781 4688 scope.go:117] "RemoveContainer" containerID="40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.829273 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": container with ID starting with 40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f not found: ID does not exist" containerID="40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.829300 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} err="failed to get container status \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": rpc error: code = NotFound desc = could not find container \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": container with ID starting with 40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.829320 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.829586 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": container with ID starting with e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138 not found: ID does not exist" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.829615 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} err="failed to get container status \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": rpc error: code = NotFound desc = could not find container \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": container with ID starting with e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.829631 4688 scope.go:117] "RemoveContainer" 
containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.829841 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": container with ID starting with 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 not found: ID does not exist" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.829868 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} err="failed to get container status \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": rpc error: code = NotFound desc = could not find container \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": container with ID starting with 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.829885 4688 scope.go:117] "RemoveContainer" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.830253 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": container with ID starting with 8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c not found: ID does not exist" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.830307 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} err="failed to get container status \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": rpc error: code = NotFound desc = could not find container \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": container with ID starting with 8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.830341 4688 scope.go:117] "RemoveContainer" containerID="2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.830665 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": container with ID starting with 2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d not found: ID does not exist" containerID="2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.830700 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} err="failed to get container status \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": rpc error: code = NotFound desc = could not find container \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": container with ID starting with 
2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.830718 4688 scope.go:117] "RemoveContainer" containerID="50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.830953 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": container with ID starting with 50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b not found: ID does not exist" containerID="50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.830981 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} err="failed to get container status \"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": rpc error: code = NotFound desc = could not find container \"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": container with ID starting with 50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.831000 4688 scope.go:117] "RemoveContainer" containerID="051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.831245 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": container with ID starting with 051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5 not found: ID does not exist" containerID="051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.831276 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} err="failed to get container status \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": rpc error: code = NotFound desc = could not find container \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": container with ID starting with 051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.831295 4688 scope.go:117] "RemoveContainer" containerID="d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.831579 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": container with ID starting with d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def not found: ID does not exist" containerID="d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.831720 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} err="failed to get container status \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": rpc 
error: code = NotFound desc = could not find container \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": container with ID starting with d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.831741 4688 scope.go:117] "RemoveContainer" containerID="0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.832028 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": container with ID starting with 0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2 not found: ID does not exist" containerID="0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.832058 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} err="failed to get container status \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": rpc error: code = NotFound desc = could not find container \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": container with ID starting with 0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.832075 4688 scope.go:117] "RemoveContainer" containerID="bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d" Nov 25 12:25:38 crc kubenswrapper[4688]: E1125 12:25:38.832289 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": container with ID starting with bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d not found: ID does not exist" containerID="bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.832314 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} err="failed to get container status \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": rpc error: code = NotFound desc = could not find container \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": container with ID starting with bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.832328 4688 scope.go:117] "RemoveContainer" containerID="40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.832650 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} err="failed to get container status \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": rpc error: code = NotFound desc = could not find container \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": container with ID starting with 40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 
12:25:38.832679 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.832968 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} err="failed to get container status \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": rpc error: code = NotFound desc = could not find container \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": container with ID starting with e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.833010 4688 scope.go:117] "RemoveContainer" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.833288 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} err="failed to get container status \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": rpc error: code = NotFound desc = could not find container \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": container with ID starting with 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.833319 4688 scope.go:117] "RemoveContainer" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.833510 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} err="failed to get container status \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": rpc error: code = NotFound desc = could not find container \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": container with ID starting with 8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.833544 4688 scope.go:117] "RemoveContainer" containerID="2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.833807 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} err="failed to get container status \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": rpc error: code = NotFound desc = could not find container \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": container with ID starting with 2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.833829 4688 scope.go:117] "RemoveContainer" containerID="50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.834204 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} err="failed to get container status 
\"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": rpc error: code = NotFound desc = could not find container \"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": container with ID starting with 50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.834230 4688 scope.go:117] "RemoveContainer" containerID="051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.834630 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} err="failed to get container status \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": rpc error: code = NotFound desc = could not find container \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": container with ID starting with 051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.834659 4688 scope.go:117] "RemoveContainer" containerID="d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.834909 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} err="failed to get container status \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": rpc error: code = NotFound desc = could not find container \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": container with ID starting with d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.834933 4688 scope.go:117] "RemoveContainer" containerID="0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.835163 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} err="failed to get container status \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": rpc error: code = NotFound desc = could not find container \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": container with ID starting with 0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.835194 4688 scope.go:117] "RemoveContainer" containerID="bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.835719 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} err="failed to get container status \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": rpc error: code = NotFound desc = could not find container \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": container with ID starting with bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.835747 4688 scope.go:117] "RemoveContainer" 
containerID="40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.838333 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} err="failed to get container status \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": rpc error: code = NotFound desc = could not find container \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": container with ID starting with 40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.838368 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.838630 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} err="failed to get container status \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": rpc error: code = NotFound desc = could not find container \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": container with ID starting with e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.838651 4688 scope.go:117] "RemoveContainer" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.838870 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} err="failed to get container status \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": rpc error: code = NotFound desc = could not find container \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": container with ID starting with 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.838895 4688 scope.go:117] "RemoveContainer" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.839103 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} err="failed to get container status \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": rpc error: code = NotFound desc = could not find container \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": container with ID starting with 8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.839125 4688 scope.go:117] "RemoveContainer" containerID="2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.839327 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} err="failed to get container status \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": rpc error: code = NotFound desc = could not find 
container \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": container with ID starting with 2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.839354 4688 scope.go:117] "RemoveContainer" containerID="50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.839560 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} err="failed to get container status \"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": rpc error: code = NotFound desc = could not find container \"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": container with ID starting with 50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.839582 4688 scope.go:117] "RemoveContainer" containerID="051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.839791 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} err="failed to get container status \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": rpc error: code = NotFound desc = could not find container \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": container with ID starting with 051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.839813 4688 scope.go:117] "RemoveContainer" containerID="d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840011 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} err="failed to get container status \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": rpc error: code = NotFound desc = could not find container \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": container with ID starting with d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840031 4688 scope.go:117] "RemoveContainer" containerID="0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840228 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} err="failed to get container status \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": rpc error: code = NotFound desc = could not find container \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": container with ID starting with 0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840250 4688 scope.go:117] "RemoveContainer" containerID="bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840444 4688 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} err="failed to get container status \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": rpc error: code = NotFound desc = could not find container \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": container with ID starting with bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840470 4688 scope.go:117] "RemoveContainer" containerID="40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840684 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} err="failed to get container status \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": rpc error: code = NotFound desc = could not find container \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": container with ID starting with 40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840704 4688 scope.go:117] "RemoveContainer" containerID="e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840895 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138"} err="failed to get container status \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": rpc error: code = NotFound desc = could not find container \"e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138\": container with ID starting with e55a025651b4b2be497ce03110898d0b0b1ae0cd30547430f9739af29bba5138 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.840916 4688 scope.go:117] "RemoveContainer" containerID="9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.841099 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9"} err="failed to get container status \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": rpc error: code = NotFound desc = could not find container \"9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9\": container with ID starting with 9c9658c0cf63904556a05049402b6f7cddc8b55f7f41c15d60dc86083a457ad9 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.841120 4688 scope.go:117] "RemoveContainer" containerID="8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.841366 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c"} err="failed to get container status \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": rpc error: code = NotFound desc = could not find container \"8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c\": container with ID starting with 
8967c538bbd20d1150371afd1e6408c9bd38979d9a521d907897785bc5cd587c not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.841393 4688 scope.go:117] "RemoveContainer" containerID="2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.841625 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d"} err="failed to get container status \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": rpc error: code = NotFound desc = could not find container \"2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d\": container with ID starting with 2188864d09cb4f272c2b9ca4ef028d088688524eee58a678a14d129445c7049d not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.841647 4688 scope.go:117] "RemoveContainer" containerID="50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.841837 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b"} err="failed to get container status \"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": rpc error: code = NotFound desc = could not find container \"50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b\": container with ID starting with 50917b2cde4249f3afc5e14a3d05f7a00e8a685ee70f7002b654b03b3cf7364b not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.841858 4688 scope.go:117] "RemoveContainer" containerID="051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.842107 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5"} err="failed to get container status \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": rpc error: code = NotFound desc = could not find container \"051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5\": container with ID starting with 051b22b8193e1feb10b89a0f8d48bf8c0da920a90393d254f8ede825d57ab3c5 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.842129 4688 scope.go:117] "RemoveContainer" containerID="d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.842375 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def"} err="failed to get container status \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": rpc error: code = NotFound desc = could not find container \"d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def\": container with ID starting with d6d902b3ab4bba0767b3709fbd92e6598234a2bb5532db95a7a94f0140657def not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.842392 4688 scope.go:117] "RemoveContainer" containerID="0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.842601 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2"} err="failed to get container status \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": rpc error: code = NotFound desc = could not find container \"0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2\": container with ID starting with 0b007e10283918d3a6a2cf1f87207d32547a0b61f1542e098f3a52147154f9b2 not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.842620 4688 scope.go:117] "RemoveContainer" containerID="bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.842847 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d"} err="failed to get container status \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": rpc error: code = NotFound desc = could not find container \"bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d\": container with ID starting with bcbebd87e78c1dfff2187ec8005d72c91ab014b8ba5935c57b04bbf10a26fe5d not found: ID does not exist" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.842867 4688 scope.go:117] "RemoveContainer" containerID="40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f" Nov 25 12:25:38 crc kubenswrapper[4688]: I1125 12:25:38.843100 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f"} err="failed to get container status \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": rpc error: code = NotFound desc = could not find container \"40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f\": container with ID starting with 40c2b8e1293c6940f250c88d9274d8e86b7b2fbb826611c023d5d6df5a6eca7f not found: ID does not exist" Nov 25 12:25:39 crc kubenswrapper[4688]: I1125 12:25:39.572881 4688 generic.go:334] "Generic (PLEG): container finished" podID="9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2" containerID="26b753debca424fa8b8bdd21254030ad5afefddabfd5c00579f0dbec30395c9c" exitCode=0 Nov 25 12:25:39 crc kubenswrapper[4688]: I1125 12:25:39.573467 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerDied","Data":"26b753debca424fa8b8bdd21254030ad5afefddabfd5c00579f0dbec30395c9c"} Nov 25 12:25:39 crc kubenswrapper[4688]: I1125 12:25:39.575398 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"523d787f98084ebdd4f807678f82fde73e05c64cd089c7e3babd361dd146b063"} Nov 25 12:25:39 crc kubenswrapper[4688]: I1125 12:25:39.575416 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"39070f4dad167aa74491e891c3070e952e80c75c8228e0f08bb372d8c380bdca"} Nov 25 12:25:39 crc kubenswrapper[4688]: I1125 12:25:39.575429 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"35f06015aab2bb4c3faeb3b6c978b2c2d318bd0080e87ef0fd3d908e92e52edd"} Nov 25 
12:25:39 crc kubenswrapper[4688]: I1125 12:25:39.575440 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"6d96b7f151107d84b6cdfa51ffad185eba7319dd358f14b6ba6b43e8e4ffc230"}
Nov 25 12:25:39 crc kubenswrapper[4688]: I1125 12:25:39.575451 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"2043b0f3e9afdb290f83affbd4fb9f9ae90b9157b74d17c0e215ff0d6fd04889"}
Nov 25 12:25:39 crc kubenswrapper[4688]: I1125 12:25:39.575464 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"58be3e18e603eb000be3b8f610141438a89db72eeb4203513e3df871cc4ea669"}
Nov 25 12:25:42 crc kubenswrapper[4688]: I1125 12:25:42.594964 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"80b2e0df118573f03fc287ffcb01eda29aa918fa14e0e96bd50068dcf74968b1"}
Nov 25 12:25:44 crc kubenswrapper[4688]: I1125 12:25:44.614051 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" event={"ID":"9a7bf57c-5584-4aad-a6c8-d3a641e9fdf2","Type":"ContainerStarted","Data":"aacb1213871b48fc9d9953968a1c8fd0734089e7dc5dc75859174edd646da42e"}
Nov 25 12:25:44 crc kubenswrapper[4688]: I1125 12:25:44.614823 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk"
Nov 25 12:25:44 crc kubenswrapper[4688]: I1125 12:25:44.640741 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk"
Nov 25 12:25:44 crc kubenswrapper[4688]: I1125 12:25:44.648293 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk" podStartSLOduration=6.648267457 podStartE2EDuration="6.648267457s" podCreationTimestamp="2025-11-25 12:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:25:44.644711161 +0000 UTC m=+694.754340059" watchObservedRunningTime="2025-11-25 12:25:44.648267457 +0000 UTC m=+694.757896335"
Nov 25 12:25:45 crc kubenswrapper[4688]: I1125 12:25:45.619631 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk"
Nov 25 12:25:45 crc kubenswrapper[4688]: I1125 12:25:45.619748 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk"
Nov 25 12:25:45 crc kubenswrapper[4688]: I1125 12:25:45.656697 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk"
Nov 25 12:25:52 crc kubenswrapper[4688]: I1125 12:25:52.739980 4688 scope.go:117] "RemoveContainer" containerID="a3e9c6a69286c30e5e1065345a9b07bc7c55dbdb934f75898c22e5a18d024119"
Nov 25 12:25:52 crc kubenswrapper[4688]: E1125 12:25:52.755168 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xlfw5_openshift-multus(6c3971fa-9838-436e-97b1-be050abea83a)\"" pod="openshift-multus/multus-xlfw5" podUID="6c3971fa-9838-436e-97b1-be050abea83a"
Nov 25 12:26:03 crc kubenswrapper[4688]: I1125 12:26:03.740195 4688 scope.go:117] "RemoveContainer" containerID="a3e9c6a69286c30e5e1065345a9b07bc7c55dbdb934f75898c22e5a18d024119"
Nov 25 12:26:04 crc kubenswrapper[4688]: I1125 12:26:04.725183 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/2.log"
Nov 25 12:26:04 crc kubenswrapper[4688]: I1125 12:26:04.725909 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/1.log"
Nov 25 12:26:04 crc kubenswrapper[4688]: I1125 12:26:04.725950 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xlfw5" event={"ID":"6c3971fa-9838-436e-97b1-be050abea83a","Type":"ContainerStarted","Data":"447d9d900ba44444eede2c8aa6dff9a0fd32f0fe2bb70f0ba2de3be4be9e56de"}
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.139288 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"]
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.141488 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.144005 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.160047 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"]
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.308202 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqz48\" (UniqueName: \"kubernetes.io/projected/8e87d6d2-0104-4184-a6ca-8bc371a7e768-kube-api-access-xqz48\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.308834 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.308974 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.410118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.410220 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqz48\" (UniqueName: \"kubernetes.io/projected/8e87d6d2-0104-4184-a6ca-8bc371a7e768-kube-api-access-xqz48\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.410279 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.410718 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.410940 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.431952 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqz48\" (UniqueName: \"kubernetes.io/projected/8e87d6d2-0104-4184-a6ca-8bc371a7e768-kube-api-access-xqz48\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.469449 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.674689 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"]
Nov 25 12:26:07 crc kubenswrapper[4688]: I1125 12:26:07.743771 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr" event={"ID":"8e87d6d2-0104-4184-a6ca-8bc371a7e768","Type":"ContainerStarted","Data":"65c6271b566b7841780f03cf31f151fc936aac35df9504d8c180113afabbbaa5"}
Nov 25 12:26:08 crc kubenswrapper[4688]: I1125 12:26:08.427398 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tkzmk"
Nov 25 12:26:08 crc kubenswrapper[4688]: I1125 12:26:08.750616 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr" event={"ID":"8e87d6d2-0104-4184-a6ca-8bc371a7e768","Type":"ContainerStarted","Data":"3535840fdc885cfa874d73c4217362bb96b73cc586a23a794dd9c2f5fbfb4dc9"}
Nov 25 12:26:09 crc kubenswrapper[4688]: I1125 12:26:09.757832 4688 generic.go:334] "Generic (PLEG): container finished" podID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerID="3535840fdc885cfa874d73c4217362bb96b73cc586a23a794dd9c2f5fbfb4dc9" exitCode=0
Nov 25 12:26:09 crc kubenswrapper[4688]: I1125 12:26:09.757877 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr" event={"ID":"8e87d6d2-0104-4184-a6ca-8bc371a7e768","Type":"ContainerDied","Data":"3535840fdc885cfa874d73c4217362bb96b73cc586a23a794dd9c2f5fbfb4dc9"}
Nov 25 12:26:11 crc kubenswrapper[4688]: I1125 12:26:11.242639 4688 scope.go:117] "RemoveContainer" containerID="b6b2d964c8b260a393b7d9b6ee5949cc3f352550b963bce12edc06b94d241a37"
Nov 25 12:26:11 crc kubenswrapper[4688]: I1125 12:26:11.772808 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xlfw5_6c3971fa-9838-436e-97b1-be050abea83a/kube-multus/2.log"
Nov 25 12:26:12 crc kubenswrapper[4688]: I1125 12:26:12.783208 4688 generic.go:334] "Generic (PLEG): container finished" podID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerID="fd2cd4e57086a95f8ede59cdfbc557d3e96d3c41e2e5e30c21423307ef2a66b3" exitCode=0
Nov 25 12:26:12 crc kubenswrapper[4688]: I1125 12:26:12.783355 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr" event={"ID":"8e87d6d2-0104-4184-a6ca-8bc371a7e768","Type":"ContainerDied","Data":"fd2cd4e57086a95f8ede59cdfbc557d3e96d3c41e2e5e30c21423307ef2a66b3"}
Nov 25 12:26:13 crc kubenswrapper[4688]: I1125 12:26:13.792177 4688 generic.go:334] "Generic (PLEG): container finished" podID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerID="212f8b434c3eefc6877b121ef49fabbd9f5d40fd338e7efe6486a71b3f3b0df2" exitCode=0
Nov 25 12:26:13 crc kubenswrapper[4688]: I1125 12:26:13.792228 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr" event={"ID":"8e87d6d2-0104-4184-a6ca-8bc371a7e768","Type":"ContainerDied","Data":"212f8b434c3eefc6877b121ef49fabbd9f5d40fd338e7efe6486a71b3f3b0df2"}
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.022128 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.209272 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqz48\" (UniqueName: \"kubernetes.io/projected/8e87d6d2-0104-4184-a6ca-8bc371a7e768-kube-api-access-xqz48\") pod \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") "
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.209343 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-util\") pod \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") "
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.209456 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-bundle\") pod \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\" (UID: \"8e87d6d2-0104-4184-a6ca-8bc371a7e768\") "
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.210326 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-bundle" (OuterVolumeSpecName: "bundle") pod "8e87d6d2-0104-4184-a6ca-8bc371a7e768" (UID: "8e87d6d2-0104-4184-a6ca-8bc371a7e768"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.214698 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e87d6d2-0104-4184-a6ca-8bc371a7e768-kube-api-access-xqz48" (OuterVolumeSpecName: "kube-api-access-xqz48") pod "8e87d6d2-0104-4184-a6ca-8bc371a7e768" (UID: "8e87d6d2-0104-4184-a6ca-8bc371a7e768"). InnerVolumeSpecName "kube-api-access-xqz48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.221224 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-util" (OuterVolumeSpecName: "util") pod "8e87d6d2-0104-4184-a6ca-8bc371a7e768" (UID: "8e87d6d2-0104-4184-a6ca-8bc371a7e768"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.312113 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.312183 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqz48\" (UniqueName: \"kubernetes.io/projected/8e87d6d2-0104-4184-a6ca-8bc371a7e768-kube-api-access-xqz48\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.312207 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e87d6d2-0104-4184-a6ca-8bc371a7e768-util\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.805944 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr" event={"ID":"8e87d6d2-0104-4184-a6ca-8bc371a7e768","Type":"ContainerDied","Data":"65c6271b566b7841780f03cf31f151fc936aac35df9504d8c180113afabbbaa5"}
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.806000 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c6271b566b7841780f03cf31f151fc936aac35df9504d8c180113afabbbaa5"
Nov 25 12:26:15 crc kubenswrapper[4688]: I1125 12:26:15.806029 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.667458 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-l75hn"]
Nov 25 12:26:18 crc kubenswrapper[4688]: E1125 12:26:18.668211 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerName="pull"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.668224 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerName="pull"
Nov 25 12:26:18 crc kubenswrapper[4688]: E1125 12:26:18.668238 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerName="util"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.668244 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerName="util"
Nov 25 12:26:18 crc kubenswrapper[4688]: E1125 12:26:18.668252 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerName="extract"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.668260 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerName="extract"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.668343 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e87d6d2-0104-4184-a6ca-8bc371a7e768" containerName="extract"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.668711 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-l75hn"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.673502 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.673764 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tbt79"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.673777 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.680747 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-l75hn"]
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.853239 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qdn7\" (UniqueName: \"kubernetes.io/projected/c08e5e0c-e882-43da-8211-ab86d099db71-kube-api-access-2qdn7\") pod \"nmstate-operator-557fdffb88-l75hn\" (UID: \"c08e5e0c-e882-43da-8211-ab86d099db71\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-l75hn"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.954512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qdn7\" (UniqueName: \"kubernetes.io/projected/c08e5e0c-e882-43da-8211-ab86d099db71-kube-api-access-2qdn7\") pod \"nmstate-operator-557fdffb88-l75hn\" (UID: \"c08e5e0c-e882-43da-8211-ab86d099db71\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-l75hn"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.975102 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qdn7\" (UniqueName: \"kubernetes.io/projected/c08e5e0c-e882-43da-8211-ab86d099db71-kube-api-access-2qdn7\") pod \"nmstate-operator-557fdffb88-l75hn\" (UID: \"c08e5e0c-e882-43da-8211-ab86d099db71\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-l75hn"
Nov 25 12:26:18 crc kubenswrapper[4688]: I1125 12:26:18.982362 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-l75hn"
Nov 25 12:26:19 crc kubenswrapper[4688]: I1125 12:26:19.177365 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-l75hn"]
Nov 25 12:26:19 crc kubenswrapper[4688]: W1125 12:26:19.186925 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc08e5e0c_e882_43da_8211_ab86d099db71.slice/crio-2e78c301961318bd0871830aecdd9dfe8a6bb6c05ff32435ad972fb5c1662c84 WatchSource:0}: Error finding container 2e78c301961318bd0871830aecdd9dfe8a6bb6c05ff32435ad972fb5c1662c84: Status 404 returned error can't find the container with id 2e78c301961318bd0871830aecdd9dfe8a6bb6c05ff32435ad972fb5c1662c84
Nov 25 12:26:19 crc kubenswrapper[4688]: I1125 12:26:19.824878 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-l75hn" event={"ID":"c08e5e0c-e882-43da-8211-ab86d099db71","Type":"ContainerStarted","Data":"2e78c301961318bd0871830aecdd9dfe8a6bb6c05ff32435ad972fb5c1662c84"}
Nov 25 12:26:21 crc kubenswrapper[4688]: I1125 12:26:21.834755 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-l75hn" event={"ID":"c08e5e0c-e882-43da-8211-ab86d099db71","Type":"ContainerStarted","Data":"d788a2a3f66be881f3952dc2af0cae3fa725066b108a4f109fdebade41054ec0"}
Nov 25 12:26:21 crc kubenswrapper[4688]: I1125 12:26:21.852116 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-l75hn" podStartSLOduration=1.741400186 podStartE2EDuration="3.852094298s" podCreationTimestamp="2025-11-25 12:26:18 +0000 UTC" firstStartedPulling="2025-11-25 12:26:19.18992873 +0000 UTC m=+729.299557598" lastFinishedPulling="2025-11-25 12:26:21.300622842 +0000 UTC m=+731.410251710" observedRunningTime="2025-11-25 12:26:21.848647336 +0000 UTC m=+731.958276214" watchObservedRunningTime="2025-11-25 12:26:21.852094298 +0000 UTC m=+731.961723176"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.574018 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.575731 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.582029 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xh7dt"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.597071 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.597988 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.599905 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.604508 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.622063 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.626831 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vc899"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.627468 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.727990 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.729367 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.733198 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hgvg9"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.735793 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.742189 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.750196 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.769260 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/af18cbb6-5f3d-4fa6-914a-421fe283aa4e-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-xjfsz\" (UID: \"af18cbb6-5f3d-4fa6-914a-421fe283aa4e\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.769349 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwbq\" (UniqueName: \"kubernetes.io/projected/48a378b8-e17a-41c3-b612-a9c503dcbc58-kube-api-access-bvwbq\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.769399 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-ovs-socket\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.769566 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsp6\" (UniqueName: \"kubernetes.io/projected/af18cbb6-5f3d-4fa6-914a-421fe283aa4e-kube-api-access-nmsp6\") pod \"nmstate-webhook-6b89b748d8-xjfsz\" (UID: \"af18cbb6-5f3d-4fa6-914a-421fe283aa4e\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.769621 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpr4\" (UniqueName: \"kubernetes.io/projected/d63a65a2-b47a-49e4-8489-f7aee9d6929d-kube-api-access-zzpr4\") pod \"nmstate-metrics-5dcf9c57c5-8pxc2\" (UID: \"d63a65a2-b47a-49e4-8489-f7aee9d6929d\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.769714 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-nmstate-lock\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.769748 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-dbus-socket\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870572 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd5cb4b-20c0-4042-b348-001e8084c2f4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870622 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsp6\" (UniqueName: \"kubernetes.io/projected/af18cbb6-5f3d-4fa6-914a-421fe283aa4e-kube-api-access-nmsp6\") pod \"nmstate-webhook-6b89b748d8-xjfsz\" (UID: \"af18cbb6-5f3d-4fa6-914a-421fe283aa4e\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870640 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpr4\" (UniqueName: \"kubernetes.io/projected/d63a65a2-b47a-49e4-8489-f7aee9d6929d-kube-api-access-zzpr4\") pod \"nmstate-metrics-5dcf9c57c5-8pxc2\" (UID: \"d63a65a2-b47a-49e4-8489-f7aee9d6929d\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870681 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6bd5cb4b-20c0-4042-b348-001e8084c2f4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870801 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-nmstate-lock\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-dbus-socket\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870891 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86skk\" (UniqueName: \"kubernetes.io/projected/6bd5cb4b-20c0-4042-b348-001e8084c2f4-kube-api-access-86skk\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870928 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/af18cbb6-5f3d-4fa6-914a-421fe283aa4e-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-xjfsz\" (UID: \"af18cbb6-5f3d-4fa6-914a-421fe283aa4e\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.870955 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-nmstate-lock\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.871113 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwbq\" (UniqueName: \"kubernetes.io/projected/48a378b8-e17a-41c3-b612-a9c503dcbc58-kube-api-access-bvwbq\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.871163 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-dbus-socket\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.871239 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-ovs-socket\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.871378 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/48a378b8-e17a-41c3-b612-a9c503dcbc58-ovs-socket\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.885418 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/af18cbb6-5f3d-4fa6-914a-421fe283aa4e-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-xjfsz\" (UID: \"af18cbb6-5f3d-4fa6-914a-421fe283aa4e\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.890539 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwbq\" (UniqueName: \"kubernetes.io/projected/48a378b8-e17a-41c3-b612-a9c503dcbc58-kube-api-access-bvwbq\") pod \"nmstate-handler-vc899\" (UID: \"48a378b8-e17a-41c3-b612-a9c503dcbc58\") " pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.891222 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpr4\" (UniqueName: \"kubernetes.io/projected/d63a65a2-b47a-49e4-8489-f7aee9d6929d-kube-api-access-zzpr4\") pod \"nmstate-metrics-5dcf9c57c5-8pxc2\" (UID: \"d63a65a2-b47a-49e4-8489-f7aee9d6929d\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.891465 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmsp6\" (UniqueName: \"kubernetes.io/projected/af18cbb6-5f3d-4fa6-914a-421fe283aa4e-kube-api-access-nmsp6\") pod \"nmstate-webhook-6b89b748d8-xjfsz\" (UID: \"af18cbb6-5f3d-4fa6-914a-421fe283aa4e\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.894642 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.914801 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.916543 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bd76bbbc5-r6p79"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.919106 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.940501 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bd76bbbc5-r6p79"]
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.945031 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.972266 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6bd5cb4b-20c0-4042-b348-001e8084c2f4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.972303 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86skk\" (UniqueName: \"kubernetes.io/projected/6bd5cb4b-20c0-4042-b348-001e8084c2f4-kube-api-access-86skk\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.972350 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd5cb4b-20c0-4042-b348-001e8084c2f4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.973480 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6bd5cb4b-20c0-4042-b348-001e8084c2f4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.976103 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd5cb4b-20c0-4042-b348-001e8084c2f4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:27 crc kubenswrapper[4688]: I1125 12:26:27.992720 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86skk\" (UniqueName: \"kubernetes.io/projected/6bd5cb4b-20c0-4042-b348-001e8084c2f4-kube-api-access-86skk\") pod \"nmstate-console-plugin-5874bd7bc5-h8w7b\" (UID: \"6bd5cb4b-20c0-4042-b348-001e8084c2f4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.050148 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.073356 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-oauth-serving-cert\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.073855 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-service-ca\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.073876 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd39e806-61d3-43f9-b153-dd85603b88cb-console-serving-cert\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.073898 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z2gl\" (UniqueName: \"kubernetes.io/projected/fd39e806-61d3-43f9-b153-dd85603b88cb-kube-api-access-2z2gl\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.073965 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fd39e806-61d3-43f9-b153-dd85603b88cb-console-oauth-config\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.073997 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-trusted-ca-bundle\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.074015 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-console-config\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.129240 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2"]
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.175965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-oauth-serving-cert\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.176028 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-service-ca\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.176055 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd39e806-61d3-43f9-b153-dd85603b88cb-console-serving-cert\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.176080 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z2gl\" (UniqueName: \"kubernetes.io/projected/fd39e806-61d3-43f9-b153-dd85603b88cb-kube-api-access-2z2gl\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.176117 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fd39e806-61d3-43f9-b153-dd85603b88cb-console-oauth-config\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.176145 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-trusted-ca-bundle\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.176170 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-console-config\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.177468 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-oauth-serving-cert\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.177756 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-service-ca\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.177934 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-trusted-ca-bundle\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.178134 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd39e806-61d3-43f9-b153-dd85603b88cb-console-config\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.183320 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd39e806-61d3-43f9-b153-dd85603b88cb-console-serving-cert\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.184709 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fd39e806-61d3-43f9-b153-dd85603b88cb-console-oauth-config\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.186080 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"]
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.194391 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z2gl\" (UniqueName: \"kubernetes.io/projected/fd39e806-61d3-43f9-b153-dd85603b88cb-kube-api-access-2z2gl\") pod \"console-7bd76bbbc5-r6p79\" (UID: \"fd39e806-61d3-43f9-b153-dd85603b88cb\") " pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.262322 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b"]
Nov 25 12:26:28 crc kubenswrapper[4688]: W1125 12:26:28.268531 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bd5cb4b_20c0_4042_b348_001e8084c2f4.slice/crio-e958d35bc28ca7710de80420ec0fc161517ae33c13cc7d0f11f30a99e1266ff9 WatchSource:0}: Error finding container e958d35bc28ca7710de80420ec0fc161517ae33c13cc7d0f11f30a99e1266ff9: Status 404 returned error can't find the container with id e958d35bc28ca7710de80420ec0fc161517ae33c13cc7d0f11f30a99e1266ff9
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.286994 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.467579 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bd76bbbc5-r6p79"]
Nov 25 12:26:28 crc kubenswrapper[4688]: W1125 12:26:28.472399 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd39e806_61d3_43f9_b153_dd85603b88cb.slice/crio-992d4ed5fd65f79876b99db7d6b0a243836d2cc45890512622c26966d934e409 WatchSource:0}: Error finding container 992d4ed5fd65f79876b99db7d6b0a243836d2cc45890512622c26966d934e409: Status 404 returned error can't find the container with id 992d4ed5fd65f79876b99db7d6b0a243836d2cc45890512622c26966d934e409
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.873097 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bd76bbbc5-r6p79" event={"ID":"fd39e806-61d3-43f9-b153-dd85603b88cb","Type":"ContainerStarted","Data":"d28c1c92d003d6af177cac447f09f23b3cb7032466dd9300b575b518e6201753"}
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.873494 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bd76bbbc5-r6p79" event={"ID":"fd39e806-61d3-43f9-b153-dd85603b88cb","Type":"ContainerStarted","Data":"992d4ed5fd65f79876b99db7d6b0a243836d2cc45890512622c26966d934e409"}
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.874140 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz" event={"ID":"af18cbb6-5f3d-4fa6-914a-421fe283aa4e","Type":"ContainerStarted","Data":"275c7509a4508b3a96cadc408e345b0212b33838ac8f4abeda1fba895037dc22"}
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.875507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2" event={"ID":"d63a65a2-b47a-49e4-8489-f7aee9d6929d","Type":"ContainerStarted","Data":"c4b4ea3be8be0e111f1c30405b449f21e6189df944a039592cfd359fa3343ea2"}
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.877566 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b" event={"ID":"6bd5cb4b-20c0-4042-b348-001e8084c2f4","Type":"ContainerStarted","Data":"e958d35bc28ca7710de80420ec0fc161517ae33c13cc7d0f11f30a99e1266ff9"}
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.878985 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vc899" event={"ID":"48a378b8-e17a-41c3-b612-a9c503dcbc58","Type":"ContainerStarted","Data":"0b0671ac6ba02627e3a8d770191e23b288cb7fd3ef6e35ab0b96e7094c42a81f"}
Nov 25 12:26:28 crc kubenswrapper[4688]: I1125 12:26:28.895338 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bd76bbbc5-r6p79" podStartSLOduration=1.895319974 podStartE2EDuration="1.895319974s" podCreationTimestamp="2025-11-25 12:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:26:28.894049539 +0000 UTC m=+739.003678417" watchObservedRunningTime="2025-11-25 12:26:28.895319974 +0000 UTC m=+739.004948842"
Nov 25 12:26:31 crc kubenswrapper[4688]: I1125 12:26:31.898434 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vc899" event={"ID":"48a378b8-e17a-41c3-b612-a9c503dcbc58","Type":"ContainerStarted","Data":"798f164add144ac0d5afd4a6679b2daeb70a73f7ffdd56df980daa20e6bc3f46"}
Nov 25 12:26:31 crc kubenswrapper[4688]: I1125 12:26:31.898803 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:31 crc kubenswrapper[4688]: I1125 12:26:31.899979 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz" event={"ID":"af18cbb6-5f3d-4fa6-914a-421fe283aa4e","Type":"ContainerStarted","Data":"612fbac07eff113c71bbf9d730d03560f9a59bbd1accdd954901848004f12139"}
Nov 25 12:26:31 crc kubenswrapper[4688]: I1125 12:26:31.900374 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz"
Nov 25 12:26:31 crc kubenswrapper[4688]: I1125 12:26:31.901564 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2" event={"ID":"d63a65a2-b47a-49e4-8489-f7aee9d6929d","Type":"ContainerStarted","Data":"bb8b52e7e64bdc77849841fb661283ff0e450d1ca23d573a6b01e963af5bdc9f"}
Nov 25 12:26:31 crc kubenswrapper[4688]: I1125 12:26:31.902577 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b" event={"ID":"6bd5cb4b-20c0-4042-b348-001e8084c2f4","Type":"ContainerStarted","Data":"0b786315186dadea45bed8f0001516d02f74600b55a56d05310409a2f37c5507"}
Nov 25 12:26:31 crc kubenswrapper[4688]: I1125 12:26:31.935481 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vc899" podStartSLOduration=1.4605552689999999 podStartE2EDuration="4.935458711s" podCreationTimestamp="2025-11-25 12:26:27 +0000 UTC" firstStartedPulling="2025-11-25 12:26:27.969518858 +0000 UTC m=+738.079147726" lastFinishedPulling="2025-11-25 12:26:31.44442231 +0000 UTC m=+741.554051168" observedRunningTime="2025-11-25 12:26:31.926096519 +0000 UTC m=+742.035725387" watchObservedRunningTime="2025-11-25 12:26:31.935458711 +0000 UTC m=+742.045087579"
Nov 25 12:26:31 crc kubenswrapper[4688]: I1125 12:26:31.963649 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz" podStartSLOduration=1.728352772 podStartE2EDuration="4.963625848s" podCreationTimestamp="2025-11-25 12:26:27 +0000 UTC" firstStartedPulling="2025-11-25 12:26:28.209062602 +0000 UTC m=+738.318691470" lastFinishedPulling="2025-11-25 12:26:31.444335678 +0000 UTC m=+741.553964546" observedRunningTime="2025-11-25 12:26:31.961193693 +0000 UTC m=+742.070822561" watchObservedRunningTime="2025-11-25 12:26:31.963625848 +0000 UTC m=+742.073254726"
Nov 25 12:26:32 crc kubenswrapper[4688]: I1125 12:26:32.010303 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h8w7b" podStartSLOduration=1.868171433 podStartE2EDuration="5.010281683s" podCreationTimestamp="2025-11-25 12:26:27 +0000 UTC" firstStartedPulling="2025-11-25 12:26:28.271358308 +0000 UTC m=+738.380987176" lastFinishedPulling="2025-11-25 12:26:31.413468558 +0000 UTC m=+741.523097426" observedRunningTime="2025-11-25 12:26:32.009945604 +0000 UTC m=+742.119574472" watchObservedRunningTime="2025-11-25 12:26:32.010281683 +0000 UTC m=+742.119910551"
Nov 25 12:26:34 crc kubenswrapper[4688]: I1125 12:26:34.919750 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2" event={"ID":"d63a65a2-b47a-49e4-8489-f7aee9d6929d","Type":"ContainerStarted","Data":"8debc9fe46405f8c054c22512eaa7dfd5dcebde9fcad003a7890292e18289095"}
Nov 25 12:26:34 crc kubenswrapper[4688]: I1125 12:26:34.937235 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8pxc2" podStartSLOduration=1.975243404 podStartE2EDuration="7.937217504s" podCreationTimestamp="2025-11-25 12:26:27 +0000 UTC" firstStartedPulling="2025-11-25 12:26:28.155494551 +0000 UTC m=+738.265123419" lastFinishedPulling="2025-11-25 12:26:34.117468651 +0000 UTC m=+744.227097519" observedRunningTime="2025-11-25 12:26:34.933821192 +0000 UTC m=+745.043450060" watchObservedRunningTime="2025-11-25 12:26:34.937217504 +0000 UTC m=+745.046846372"
Nov 25 12:26:37 crc kubenswrapper[4688]: I1125 12:26:37.979988 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vc899"
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.288244 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.288845 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.293423 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.584968 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7zpmk"]
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.585306 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" podUID="14c3d286-5003-4b44-81c6-220e491ba838" containerName="controller-manager" containerID="cri-o://244ffaad088c3c6e9535814d40eeb27f146db38e17ec4bc058529a617c3266b6" gracePeriod=30
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.696622 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq"]
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.696888 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" podUID="c41361de-7942-4b97-97d8-9fd467394b25" containerName="route-controller-manager" containerID="cri-o://6de4778e098bb54032d08095cf2fc5f887f413c2277d95a1a644def6fc5ce1fd" gracePeriod=30
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.945760 4688 generic.go:334] "Generic (PLEG): container finished" podID="c41361de-7942-4b97-97d8-9fd467394b25" containerID="6de4778e098bb54032d08095cf2fc5f887f413c2277d95a1a644def6fc5ce1fd" exitCode=0
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.945810 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" event={"ID":"c41361de-7942-4b97-97d8-9fd467394b25","Type":"ContainerDied","Data":"6de4778e098bb54032d08095cf2fc5f887f413c2277d95a1a644def6fc5ce1fd"}
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.949139 4688 generic.go:334] "Generic (PLEG): container finished" podID="14c3d286-5003-4b44-81c6-220e491ba838" containerID="244ffaad088c3c6e9535814d40eeb27f146db38e17ec4bc058529a617c3266b6" exitCode=0
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.949234 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" event={"ID":"14c3d286-5003-4b44-81c6-220e491ba838","Type":"ContainerDied","Data":"244ffaad088c3c6e9535814d40eeb27f146db38e17ec4bc058529a617c3266b6"}
Nov 25 12:26:38 crc kubenswrapper[4688]: I1125 12:26:38.959188 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bd76bbbc5-r6p79"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.034397 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.046738 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xf6xd"]
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.116206 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149310 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djs2n\" (UniqueName: \"kubernetes.io/projected/c41361de-7942-4b97-97d8-9fd467394b25-kube-api-access-djs2n\") pod \"c41361de-7942-4b97-97d8-9fd467394b25\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149379 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-proxy-ca-bundles\") pod \"14c3d286-5003-4b44-81c6-220e491ba838\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c3d286-5003-4b44-81c6-220e491ba838-serving-cert\") pod \"14c3d286-5003-4b44-81c6-220e491ba838\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149458 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkfhq\" (UniqueName: \"kubernetes.io/projected/14c3d286-5003-4b44-81c6-220e491ba838-kube-api-access-gkfhq\") pod \"14c3d286-5003-4b44-81c6-220e491ba838\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149514 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-client-ca\") pod \"14c3d286-5003-4b44-81c6-220e491ba838\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149572 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41361de-7942-4b97-97d8-9fd467394b25-serving-cert\") pod \"c41361de-7942-4b97-97d8-9fd467394b25\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149650 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-config\") pod \"14c3d286-5003-4b44-81c6-220e491ba838\" (UID: \"14c3d286-5003-4b44-81c6-220e491ba838\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149699 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-client-ca\") pod \"c41361de-7942-4b97-97d8-9fd467394b25\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.149759 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-config\") pod \"c41361de-7942-4b97-97d8-9fd467394b25\" (UID: \"c41361de-7942-4b97-97d8-9fd467394b25\") "
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.151338 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-client-ca" (OuterVolumeSpecName: "client-ca") pod "c41361de-7942-4b97-97d8-9fd467394b25" (UID: "c41361de-7942-4b97-97d8-9fd467394b25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.151501 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-config" (OuterVolumeSpecName: "config") pod "c41361de-7942-4b97-97d8-9fd467394b25" (UID: "c41361de-7942-4b97-97d8-9fd467394b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.152294 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-client-ca" (OuterVolumeSpecName: "client-ca") pod "14c3d286-5003-4b44-81c6-220e491ba838" (UID: "14c3d286-5003-4b44-81c6-220e491ba838"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.156082 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-config" (OuterVolumeSpecName: "config") pod "14c3d286-5003-4b44-81c6-220e491ba838" (UID: "14c3d286-5003-4b44-81c6-220e491ba838"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.157853 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "14c3d286-5003-4b44-81c6-220e491ba838" (UID: "14c3d286-5003-4b44-81c6-220e491ba838"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.159116 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c3d286-5003-4b44-81c6-220e491ba838-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14c3d286-5003-4b44-81c6-220e491ba838" (UID: "14c3d286-5003-4b44-81c6-220e491ba838"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.159205 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41361de-7942-4b97-97d8-9fd467394b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c41361de-7942-4b97-97d8-9fd467394b25" (UID: "c41361de-7942-4b97-97d8-9fd467394b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.159167 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41361de-7942-4b97-97d8-9fd467394b25-kube-api-access-djs2n" (OuterVolumeSpecName: "kube-api-access-djs2n") pod "c41361de-7942-4b97-97d8-9fd467394b25" (UID: "c41361de-7942-4b97-97d8-9fd467394b25"). InnerVolumeSpecName "kube-api-access-djs2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.159712 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c3d286-5003-4b44-81c6-220e491ba838-kube-api-access-gkfhq" (OuterVolumeSpecName: "kube-api-access-gkfhq") pod "14c3d286-5003-4b44-81c6-220e491ba838" (UID: "14c3d286-5003-4b44-81c6-220e491ba838"). InnerVolumeSpecName "kube-api-access-gkfhq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251806 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-config\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251844 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djs2n\" (UniqueName: \"kubernetes.io/projected/c41361de-7942-4b97-97d8-9fd467394b25-kube-api-access-djs2n\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251857 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251866 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c3d286-5003-4b44-81c6-220e491ba838-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251875 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkfhq\" (UniqueName: \"kubernetes.io/projected/14c3d286-5003-4b44-81c6-220e491ba838-kube-api-access-gkfhq\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251883 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-client-ca\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251891 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41361de-7942-4b97-97d8-9fd467394b25-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251900 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c3d286-5003-4b44-81c6-220e491ba838-config\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.251908 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41361de-7942-4b97-97d8-9fd467394b25-client-ca\") on node \"crc\" DevicePath \"\""
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.836397 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx"]
Nov 25 12:26:39 crc kubenswrapper[4688]: E1125 12:26:39.836920 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c3d286-5003-4b44-81c6-220e491ba838" containerName="controller-manager"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.836935 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c3d286-5003-4b44-81c6-220e491ba838" containerName="controller-manager"
Nov 25 12:26:39 crc kubenswrapper[4688]: E1125 12:26:39.836955 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41361de-7942-4b97-97d8-9fd467394b25" containerName="route-controller-manager"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.836965 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41361de-7942-4b97-97d8-9fd467394b25" containerName="route-controller-manager"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.837075 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c3d286-5003-4b44-81c6-220e491ba838" containerName="controller-manager"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.837087 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41361de-7942-4b97-97d8-9fd467394b25" containerName="route-controller-manager"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.837467 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.840614 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cd6cf54-fmztr"]
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.841449 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.852561 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cd6cf54-fmztr"]
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.863936 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx"]
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.956372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" event={"ID":"14c3d286-5003-4b44-81c6-220e491ba838","Type":"ContainerDied","Data":"ea4209c2e9ab0751a22fc70acfee781e733793ec3e3662b33a15047ce29d870c"}
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.956660 4688 scope.go:117] "RemoveContainer" containerID="244ffaad088c3c6e9535814d40eeb27f146db38e17ec4bc058529a617c3266b6"
Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.956595 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7zpmk" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.958713 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" event={"ID":"c41361de-7942-4b97-97d8-9fd467394b25","Type":"ContainerDied","Data":"d295f9150ef02fa233ede57154926349031146520c3a2b795cfb888dead19dda"} Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.958789 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959380 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-client-ca\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959422 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bd405a-ddce-4411-b368-46a2289b9021-serving-cert\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959491 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-proxy-ca-bundles\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959542 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97xdd\" (UniqueName: \"kubernetes.io/projected/01bd405a-ddce-4411-b368-46a2289b9021-kube-api-access-97xdd\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959571 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-config\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959598 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-config\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959655 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/84b6dda0-7615-4850-b8a2-fc4190921ade-serving-cert\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959932 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-client-ca\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.959978 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6v7m\" (UniqueName: \"kubernetes.io/projected/84b6dda0-7615-4850-b8a2-fc4190921ade-kube-api-access-b6v7m\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.974518 4688 scope.go:117] "RemoveContainer" containerID="6de4778e098bb54032d08095cf2fc5f887f413c2277d95a1a644def6fc5ce1fd" Nov 25 12:26:39 crc kubenswrapper[4688]: I1125 12:26:39.992871 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7zpmk"] Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.000477 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7zpmk"] Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.006914 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq"] Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.010822 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-67wqq"] Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061384 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-client-ca\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061463 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bd405a-ddce-4411-b368-46a2289b9021-serving-cert\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061500 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-proxy-ca-bundles\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061568 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xdd\" (UniqueName: 
\"kubernetes.io/projected/01bd405a-ddce-4411-b368-46a2289b9021-kube-api-access-97xdd\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-config\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061621 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-config\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061652 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b6dda0-7615-4850-b8a2-fc4190921ade-serving-cert\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061695 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-client-ca\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.061741 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6v7m\" (UniqueName: \"kubernetes.io/projected/84b6dda0-7615-4850-b8a2-fc4190921ade-kube-api-access-b6v7m\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.062603 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-client-ca\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.063232 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-client-ca\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.063279 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-config\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " 
pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.063328 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-config\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.064186 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84b6dda0-7615-4850-b8a2-fc4190921ade-proxy-ca-bundles\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.066276 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b6dda0-7615-4850-b8a2-fc4190921ade-serving-cert\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.067343 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bd405a-ddce-4411-b368-46a2289b9021-serving-cert\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.086477 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6v7m\" (UniqueName: \"kubernetes.io/projected/84b6dda0-7615-4850-b8a2-fc4190921ade-kube-api-access-b6v7m\") pod \"controller-manager-5cd6cf54-fmztr\" (UID: \"84b6dda0-7615-4850-b8a2-fc4190921ade\") " pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.086597 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xdd\" (UniqueName: \"kubernetes.io/projected/01bd405a-ddce-4411-b368-46a2289b9021-kube-api-access-97xdd\") pod \"route-controller-manager-5fccc7cfc7-dr5dx\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.165404 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx"] Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.165463 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.182813 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.428892 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cd6cf54-fmztr"] Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.510748 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx"] Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.749884 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c3d286-5003-4b44-81c6-220e491ba838" path="/var/lib/kubelet/pods/14c3d286-5003-4b44-81c6-220e491ba838/volumes" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.750793 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41361de-7942-4b97-97d8-9fd467394b25" path="/var/lib/kubelet/pods/c41361de-7942-4b97-97d8-9fd467394b25/volumes" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.966298 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" event={"ID":"84b6dda0-7615-4850-b8a2-fc4190921ade","Type":"ContainerStarted","Data":"58b3475955c21738598e2f42607fa560356707754771b6b087b81a8bc5cd3a9e"} Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.966397 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" event={"ID":"84b6dda0-7615-4850-b8a2-fc4190921ade","Type":"ContainerStarted","Data":"448ca7d0e3ee96c608e4c327c4d5a986b78ab0f0442ba22dce533a3cb7a7b398"} Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.966422 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.969049 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" event={"ID":"01bd405a-ddce-4411-b368-46a2289b9021","Type":"ContainerStarted","Data":"b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957"} Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.969075 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" event={"ID":"01bd405a-ddce-4411-b368-46a2289b9021","Type":"ContainerStarted","Data":"663956341a259b80288ca7339ac1295d24e90960e62279708cacd4819bbf94cd"} Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.969159 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" podUID="01bd405a-ddce-4411-b368-46a2289b9021" containerName="route-controller-manager" containerID="cri-o://b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957" gracePeriod=30 Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.969248 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.993042 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" Nov 25 12:26:40 crc kubenswrapper[4688]: I1125 12:26:40.997782 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5cd6cf54-fmztr" podStartSLOduration=2.997760876 podStartE2EDuration="2.997760876s" podCreationTimestamp="2025-11-25 12:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:26:40.994164049 +0000 UTC m=+751.103792917" watchObservedRunningTime="2025-11-25 12:26:40.997760876 +0000 UTC m=+751.107389744" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.072335 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" podStartSLOduration=3.072318151 podStartE2EDuration="3.072318151s" podCreationTimestamp="2025-11-25 12:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:26:41.070728819 +0000 UTC m=+751.180357687" watchObservedRunningTime="2025-11-25 12:26:41.072318151 +0000 UTC m=+751.181947019" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.315022 4688 patch_prober.go:28] interesting pod/route-controller-manager-5fccc7cfc7-dr5dx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": read tcp 10.217.0.2:48084->10.217.0.46:8443: read: connection reset by peer" start-of-body= Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.315094 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" podUID="01bd405a-ddce-4411-b368-46a2289b9021" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": read tcp 10.217.0.2:48084->10.217.0.46:8443: read: connection reset by peer" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.601345 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5fccc7cfc7-dr5dx_01bd405a-ddce-4411-b368-46a2289b9021/route-controller-manager/0.log" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.602364 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.629512 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7"] Nov 25 12:26:41 crc kubenswrapper[4688]: E1125 12:26:41.629828 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bd405a-ddce-4411-b368-46a2289b9021" containerName="route-controller-manager" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.629991 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bd405a-ddce-4411-b368-46a2289b9021" containerName="route-controller-manager" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.630280 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bd405a-ddce-4411-b368-46a2289b9021" containerName="route-controller-manager" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.631311 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.639635 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7"] Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.686465 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-config\") pod \"01bd405a-ddce-4411-b368-46a2289b9021\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.686638 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bd405a-ddce-4411-b368-46a2289b9021-serving-cert\") pod \"01bd405a-ddce-4411-b368-46a2289b9021\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.686669 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97xdd\" (UniqueName: \"kubernetes.io/projected/01bd405a-ddce-4411-b368-46a2289b9021-kube-api-access-97xdd\") pod \"01bd405a-ddce-4411-b368-46a2289b9021\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.686704 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-client-ca\") pod \"01bd405a-ddce-4411-b368-46a2289b9021\" (UID: \"01bd405a-ddce-4411-b368-46a2289b9021\") " Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.686865 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj5vl\" (UniqueName: \"kubernetes.io/projected/d85be6ed-9a9b-46f4-b672-0d22017c4239-kube-api-access-lj5vl\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.686905 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85be6ed-9a9b-46f4-b672-0d22017c4239-config\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.686965 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85be6ed-9a9b-46f4-b672-0d22017c4239-serving-cert\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.687125 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d85be6ed-9a9b-46f4-b672-0d22017c4239-client-ca\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc 
kubenswrapper[4688]: I1125 12:26:41.687500 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-config" (OuterVolumeSpecName: "config") pod "01bd405a-ddce-4411-b368-46a2289b9021" (UID: "01bd405a-ddce-4411-b368-46a2289b9021"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.687562 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-client-ca" (OuterVolumeSpecName: "client-ca") pod "01bd405a-ddce-4411-b368-46a2289b9021" (UID: "01bd405a-ddce-4411-b368-46a2289b9021"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.692334 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01bd405a-ddce-4411-b368-46a2289b9021-kube-api-access-97xdd" (OuterVolumeSpecName: "kube-api-access-97xdd") pod "01bd405a-ddce-4411-b368-46a2289b9021" (UID: "01bd405a-ddce-4411-b368-46a2289b9021"). InnerVolumeSpecName "kube-api-access-97xdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.693136 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bd405a-ddce-4411-b368-46a2289b9021-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01bd405a-ddce-4411-b368-46a2289b9021" (UID: "01bd405a-ddce-4411-b368-46a2289b9021"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.788859 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85be6ed-9a9b-46f4-b672-0d22017c4239-serving-cert\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.790748 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d85be6ed-9a9b-46f4-b672-0d22017c4239-client-ca\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.790813 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj5vl\" (UniqueName: \"kubernetes.io/projected/d85be6ed-9a9b-46f4-b672-0d22017c4239-kube-api-access-lj5vl\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.790855 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85be6ed-9a9b-46f4-b672-0d22017c4239-config\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.790910 4688 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bd405a-ddce-4411-b368-46a2289b9021-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.790925 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97xdd\" (UniqueName: \"kubernetes.io/projected/01bd405a-ddce-4411-b368-46a2289b9021-kube-api-access-97xdd\") on node \"crc\" DevicePath \"\"" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.790940 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.790952 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bd405a-ddce-4411-b368-46a2289b9021-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.792247 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85be6ed-9a9b-46f4-b672-0d22017c4239-config\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.793136 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d85be6ed-9a9b-46f4-b672-0d22017c4239-serving-cert\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.793273 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d85be6ed-9a9b-46f4-b672-0d22017c4239-client-ca\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.819635 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj5vl\" (UniqueName: \"kubernetes.io/projected/d85be6ed-9a9b-46f4-b672-0d22017c4239-kube-api-access-lj5vl\") pod \"route-controller-manager-6dfcfc46db-p56q7\" (UID: \"d85be6ed-9a9b-46f4-b672-0d22017c4239\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.951128 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.977178 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5fccc7cfc7-dr5dx_01bd405a-ddce-4411-b368-46a2289b9021/route-controller-manager/0.log" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.977238 4688 generic.go:334] "Generic (PLEG): container finished" podID="01bd405a-ddce-4411-b368-46a2289b9021" containerID="b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957" exitCode=255 Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.977658 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.977682 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" event={"ID":"01bd405a-ddce-4411-b368-46a2289b9021","Type":"ContainerDied","Data":"b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957"} Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.977753 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx" event={"ID":"01bd405a-ddce-4411-b368-46a2289b9021","Type":"ContainerDied","Data":"663956341a259b80288ca7339ac1295d24e90960e62279708cacd4819bbf94cd"} Nov 25 12:26:41 crc kubenswrapper[4688]: I1125 12:26:41.977777 4688 scope.go:117] "RemoveContainer" containerID="b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957" Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.032423 4688 scope.go:117] "RemoveContainer" containerID="b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957" Nov 25 12:26:42 crc kubenswrapper[4688]: E1125 12:26:42.032832 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957\": container with ID starting with b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957 not found: ID does not exist" containerID="b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957" Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.032865 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957"} err="failed to get container status \"b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957\": rpc error: code = NotFound desc = could not find container \"b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957\": container with ID starting with b0b2154c3cdfa2ce3a087cb4e97a4e5d171fe5ddf2939b75661904a7fe257957 not found: ID does not exist" Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.041916 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx"] Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.045064 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fccc7cfc7-dr5dx"] Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.388993 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7"] Nov 25 12:26:42 crc kubenswrapper[4688]: W1125 12:26:42.395338 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd85be6ed_9a9b_46f4_b672_0d22017c4239.slice/crio-dce7a3aa193fa61569e3e56baf96d19f14f485ceb7dd17c5c773339a39eacc3a WatchSource:0}: Error finding container dce7a3aa193fa61569e3e56baf96d19f14f485ceb7dd17c5c773339a39eacc3a: Status 404 returned error can't find the container with id dce7a3aa193fa61569e3e56baf96d19f14f485ceb7dd17c5c773339a39eacc3a Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.749046 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01bd405a-ddce-4411-b368-46a2289b9021" 
path="/var/lib/kubelet/pods/01bd405a-ddce-4411-b368-46a2289b9021/volumes" Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.986488 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" event={"ID":"d85be6ed-9a9b-46f4-b672-0d22017c4239","Type":"ContainerStarted","Data":"d9abb4f5a40f5eaab31c1fba2d38d826ee40d533dd95ca6c252091cee9af413a"} Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.986582 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" event={"ID":"d85be6ed-9a9b-46f4-b672-0d22017c4239","Type":"ContainerStarted","Data":"dce7a3aa193fa61569e3e56baf96d19f14f485ceb7dd17c5c773339a39eacc3a"} Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.986840 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:42 crc kubenswrapper[4688]: I1125 12:26:42.991451 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" Nov 25 12:26:43 crc kubenswrapper[4688]: I1125 12:26:43.008690 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dfcfc46db-p56q7" podStartSLOduration=3.008669064 podStartE2EDuration="3.008669064s" podCreationTimestamp="2025-11-25 12:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:26:43.006410424 +0000 UTC m=+753.116039292" watchObservedRunningTime="2025-11-25 12:26:43.008669064 +0000 UTC m=+753.118297932" Nov 25 12:26:47 crc kubenswrapper[4688]: I1125 12:26:47.295919 4688 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 12:26:47 crc kubenswrapper[4688]: I1125 12:26:47.853706 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:26:47 crc kubenswrapper[4688]: I1125 12:26:47.853774 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:26:47 crc kubenswrapper[4688]: I1125 12:26:47.919646 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xjfsz" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.494244 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2"] Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.496030 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.498784 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.509781 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2"] Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.565396 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt66l\" (UniqueName: \"kubernetes.io/projected/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-kube-api-access-vt66l\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.565467 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.565518 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.667185 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.667333 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.667389 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt66l\" (UniqueName: \"kubernetes.io/projected/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-kube-api-access-vt66l\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.667887 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.667987 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.690073 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt66l\" (UniqueName: \"kubernetes.io/projected/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-kube-api-access-vt66l\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:01 crc kubenswrapper[4688]: I1125 12:27:01.816194 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:02 crc kubenswrapper[4688]: I1125 12:27:02.235272 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2"] Nov 25 12:27:03 crc kubenswrapper[4688]: I1125 12:27:03.108197 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" event={"ID":"9012fbba-8b92-4bbe-88ec-1ac46a53ce34","Type":"ContainerStarted","Data":"4efcd86f1254cdc9fd4bdccb712be6f1c4bfa0b63164c57aeedf1bcf8893bf3b"} Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.104324 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xf6xd" podUID="4888de7e-b0ae-4682-a404-545a9ba9cd82" containerName="console" containerID="cri-o://756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037" gracePeriod=15 Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.120018 4688 generic.go:334] "Generic (PLEG): container finished" podID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerID="bff533f66e85742f1481222f01415a82ba9a3782e83b0ae43e9ab56c8a4dc1ee" exitCode=0 Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.120084 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" event={"ID":"9012fbba-8b92-4bbe-88ec-1ac46a53ce34","Type":"ContainerDied","Data":"bff533f66e85742f1481222f01415a82ba9a3782e83b0ae43e9ab56c8a4dc1ee"} Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.560618 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xf6xd_4888de7e-b0ae-4682-a404-545a9ba9cd82/console/0.log" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.560707 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.606145 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-config\") pod \"4888de7e-b0ae-4682-a404-545a9ba9cd82\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.606199 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-trusted-ca-bundle\") pod \"4888de7e-b0ae-4682-a404-545a9ba9cd82\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.606226 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-serving-cert\") pod \"4888de7e-b0ae-4682-a404-545a9ba9cd82\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.606255 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-service-ca\") pod \"4888de7e-b0ae-4682-a404-545a9ba9cd82\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.606316 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-oauth-serving-cert\") pod \"4888de7e-b0ae-4682-a404-545a9ba9cd82\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.606332 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcmnh\" (UniqueName: \"kubernetes.io/projected/4888de7e-b0ae-4682-a404-545a9ba9cd82-kube-api-access-jcmnh\") pod \"4888de7e-b0ae-4682-a404-545a9ba9cd82\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.606353 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-oauth-config\") pod \"4888de7e-b0ae-4682-a404-545a9ba9cd82\" (UID: \"4888de7e-b0ae-4682-a404-545a9ba9cd82\") " Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.607416 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-config" (OuterVolumeSpecName: "console-config") pod "4888de7e-b0ae-4682-a404-545a9ba9cd82" (UID: "4888de7e-b0ae-4682-a404-545a9ba9cd82"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.607456 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-service-ca" (OuterVolumeSpecName: "service-ca") pod "4888de7e-b0ae-4682-a404-545a9ba9cd82" (UID: "4888de7e-b0ae-4682-a404-545a9ba9cd82"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.607477 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4888de7e-b0ae-4682-a404-545a9ba9cd82" (UID: "4888de7e-b0ae-4682-a404-545a9ba9cd82"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.608001 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4888de7e-b0ae-4682-a404-545a9ba9cd82" (UID: "4888de7e-b0ae-4682-a404-545a9ba9cd82"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.612201 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4888de7e-b0ae-4682-a404-545a9ba9cd82" (UID: "4888de7e-b0ae-4682-a404-545a9ba9cd82"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.613925 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4888de7e-b0ae-4682-a404-545a9ba9cd82" (UID: "4888de7e-b0ae-4682-a404-545a9ba9cd82"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.615262 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4888de7e-b0ae-4682-a404-545a9ba9cd82-kube-api-access-jcmnh" (OuterVolumeSpecName: "kube-api-access-jcmnh") pod "4888de7e-b0ae-4682-a404-545a9ba9cd82" (UID: "4888de7e-b0ae-4682-a404-545a9ba9cd82"). InnerVolumeSpecName "kube-api-access-jcmnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.708008 4688 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.708045 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.708057 4688 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.708069 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.708160 4688 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4888de7e-b0ae-4682-a404-545a9ba9cd82-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.708173 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcmnh\" (UniqueName: \"kubernetes.io/projected/4888de7e-b0ae-4682-a404-545a9ba9cd82-kube-api-access-jcmnh\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:04 crc kubenswrapper[4688]: I1125 12:27:04.708185 4688 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4888de7e-b0ae-4682-a404-545a9ba9cd82-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.049157 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbb2t"] Nov 25 12:27:05 crc kubenswrapper[4688]: E1125 12:27:05.049654 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4888de7e-b0ae-4682-a404-545a9ba9cd82" containerName="console" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.049739 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4888de7e-b0ae-4682-a404-545a9ba9cd82" containerName="console" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.049994 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4888de7e-b0ae-4682-a404-545a9ba9cd82" containerName="console" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.051015 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.069818 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbb2t"] Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.113465 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-utilities\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.113594 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-catalog-content\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.113640 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj27s\" (UniqueName: \"kubernetes.io/projected/77a46884-8951-48a4-8266-0da4f446c122-kube-api-access-vj27s\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.126963 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xf6xd_4888de7e-b0ae-4682-a404-545a9ba9cd82/console/0.log" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.127015 4688 generic.go:334] "Generic (PLEG): container finished" podID="4888de7e-b0ae-4682-a404-545a9ba9cd82" containerID="756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037" exitCode=2 Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.127044 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xf6xd" event={"ID":"4888de7e-b0ae-4682-a404-545a9ba9cd82","Type":"ContainerDied","Data":"756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037"} Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.127070 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xf6xd" event={"ID":"4888de7e-b0ae-4682-a404-545a9ba9cd82","Type":"ContainerDied","Data":"851594b7798968c125f5226aeab9f9e361e1b470d7ba8cd0c4334950d6d29d09"} Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.127085 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xf6xd" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.127092 4688 scope.go:117] "RemoveContainer" containerID="756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.143833 4688 scope.go:117] "RemoveContainer" containerID="756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037" Nov 25 12:27:05 crc kubenswrapper[4688]: E1125 12:27:05.144395 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037\": container with ID starting with 756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037 not found: ID does not exist" containerID="756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.144429 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037"} err="failed to get container status \"756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037\": rpc error: code = NotFound desc = could not find container \"756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037\": container with ID starting with 756948ddb6fec25d938ffc9b0f533480191ded448378fce174d7234d6d119037 not found: ID does not exist" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.148791 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xf6xd"] Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.154282 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xf6xd"] Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.215024 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-utilities\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.215084 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-catalog-content\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.215105 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj27s\" (UniqueName: \"kubernetes.io/projected/77a46884-8951-48a4-8266-0da4f446c122-kube-api-access-vj27s\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.215928 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-catalog-content\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.216067 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-utilities\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.242795 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj27s\" (UniqueName: \"kubernetes.io/projected/77a46884-8951-48a4-8266-0da4f446c122-kube-api-access-vj27s\") pod \"redhat-operators-mbb2t\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.375366 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:05 crc kubenswrapper[4688]: I1125 12:27:05.781820 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbb2t"] Nov 25 12:27:06 crc kubenswrapper[4688]: I1125 12:27:06.133408 4688 generic.go:334] "Generic (PLEG): container finished" podID="77a46884-8951-48a4-8266-0da4f446c122" containerID="600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3" exitCode=0 Nov 25 12:27:06 crc kubenswrapper[4688]: I1125 12:27:06.133446 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbb2t" event={"ID":"77a46884-8951-48a4-8266-0da4f446c122","Type":"ContainerDied","Data":"600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3"} Nov 25 12:27:06 crc kubenswrapper[4688]: I1125 12:27:06.133492 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbb2t" event={"ID":"77a46884-8951-48a4-8266-0da4f446c122","Type":"ContainerStarted","Data":"d73e294ca1e7653294f4e2b7f9a89c949ace9b81da599b061c0cd050f4364791"} Nov 25 12:27:06 crc kubenswrapper[4688]: I1125 12:27:06.747360 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4888de7e-b0ae-4682-a404-545a9ba9cd82" path="/var/lib/kubelet/pods/4888de7e-b0ae-4682-a404-545a9ba9cd82/volumes" Nov 25 12:27:09 crc kubenswrapper[4688]: I1125 12:27:09.156183 4688 generic.go:334] "Generic (PLEG): container finished" podID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerID="511a94b06b75a61054178564c06c057d9725d4c0f9b8fe1510c7694851cea69f" exitCode=0 Nov 25 12:27:09 crc kubenswrapper[4688]: I1125 12:27:09.156257 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" event={"ID":"9012fbba-8b92-4bbe-88ec-1ac46a53ce34","Type":"ContainerDied","Data":"511a94b06b75a61054178564c06c057d9725d4c0f9b8fe1510c7694851cea69f"} Nov 25 12:27:09 crc kubenswrapper[4688]: I1125 12:27:09.161362 4688 generic.go:334] "Generic (PLEG): container finished" podID="77a46884-8951-48a4-8266-0da4f446c122" containerID="2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1" exitCode=0 Nov 25 12:27:09 crc kubenswrapper[4688]: I1125 12:27:09.161413 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbb2t" event={"ID":"77a46884-8951-48a4-8266-0da4f446c122","Type":"ContainerDied","Data":"2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1"} Nov 25 12:27:10 crc kubenswrapper[4688]: I1125 12:27:10.169727 4688 generic.go:334] "Generic (PLEG): container finished" podID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerID="372d7cc961c4e562894d36698c477215465717df86528b2635834839c022cad9" exitCode=0 
Nov 25 12:27:10 crc kubenswrapper[4688]: I1125 12:27:10.170099 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" event={"ID":"9012fbba-8b92-4bbe-88ec-1ac46a53ce34","Type":"ContainerDied","Data":"372d7cc961c4e562894d36698c477215465717df86528b2635834839c022cad9"} Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.179298 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbb2t" event={"ID":"77a46884-8951-48a4-8266-0da4f446c122","Type":"ContainerStarted","Data":"e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254"} Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.213639 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mbb2t" podStartSLOduration=2.181443479 podStartE2EDuration="6.213594013s" podCreationTimestamp="2025-11-25 12:27:05 +0000 UTC" firstStartedPulling="2025-11-25 12:27:06.135183388 +0000 UTC m=+776.244812256" lastFinishedPulling="2025-11-25 12:27:10.167333922 +0000 UTC m=+780.276962790" observedRunningTime="2025-11-25 12:27:11.209486163 +0000 UTC m=+781.319115031" watchObservedRunningTime="2025-11-25 12:27:11.213594013 +0000 UTC m=+781.323222891" Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.547198 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.609900 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-util\") pod \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.609968 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-bundle\") pod \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.610008 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt66l\" (UniqueName: \"kubernetes.io/projected/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-kube-api-access-vt66l\") pod \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\" (UID: \"9012fbba-8b92-4bbe-88ec-1ac46a53ce34\") " Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.610962 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-bundle" (OuterVolumeSpecName: "bundle") pod "9012fbba-8b92-4bbe-88ec-1ac46a53ce34" (UID: "9012fbba-8b92-4bbe-88ec-1ac46a53ce34"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.615031 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-kube-api-access-vt66l" (OuterVolumeSpecName: "kube-api-access-vt66l") pod "9012fbba-8b92-4bbe-88ec-1ac46a53ce34" (UID: "9012fbba-8b92-4bbe-88ec-1ac46a53ce34"). InnerVolumeSpecName "kube-api-access-vt66l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.620770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-util" (OuterVolumeSpecName: "util") pod "9012fbba-8b92-4bbe-88ec-1ac46a53ce34" (UID: "9012fbba-8b92-4bbe-88ec-1ac46a53ce34"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.711129 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-util\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.711180 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:11 crc kubenswrapper[4688]: I1125 12:27:11.711190 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt66l\" (UniqueName: \"kubernetes.io/projected/9012fbba-8b92-4bbe-88ec-1ac46a53ce34-kube-api-access-vt66l\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:12 crc kubenswrapper[4688]: I1125 12:27:12.191925 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" event={"ID":"9012fbba-8b92-4bbe-88ec-1ac46a53ce34","Type":"ContainerDied","Data":"4efcd86f1254cdc9fd4bdccb712be6f1c4bfa0b63164c57aeedf1bcf8893bf3b"} Nov 25 12:27:12 crc kubenswrapper[4688]: I1125 12:27:12.192214 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4efcd86f1254cdc9fd4bdccb712be6f1c4bfa0b63164c57aeedf1bcf8893bf3b" Nov 25 12:27:12 crc kubenswrapper[4688]: I1125 12:27:12.191959 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2" Nov 25 12:27:15 crc kubenswrapper[4688]: I1125 12:27:15.376027 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:15 crc kubenswrapper[4688]: I1125 12:27:15.376314 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:15 crc kubenswrapper[4688]: I1125 12:27:15.423484 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:16 crc kubenswrapper[4688]: I1125 12:27:16.256178 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:17 crc kubenswrapper[4688]: I1125 12:27:17.435105 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbb2t"] Nov 25 12:27:17 crc kubenswrapper[4688]: I1125 12:27:17.853678 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:27:17 crc kubenswrapper[4688]: I1125 12:27:17.853728 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.218956 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mbb2t" podUID="77a46884-8951-48a4-8266-0da4f446c122" containerName="registry-server" containerID="cri-o://e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254" gracePeriod=2 Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.590247 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.697584 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj27s\" (UniqueName: \"kubernetes.io/projected/77a46884-8951-48a4-8266-0da4f446c122-kube-api-access-vj27s\") pod \"77a46884-8951-48a4-8266-0da4f446c122\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.697677 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-catalog-content\") pod \"77a46884-8951-48a4-8266-0da4f446c122\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.697757 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-utilities\") pod \"77a46884-8951-48a4-8266-0da4f446c122\" (UID: \"77a46884-8951-48a4-8266-0da4f446c122\") " Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.698728 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-utilities" (OuterVolumeSpecName: "utilities") pod "77a46884-8951-48a4-8266-0da4f446c122" (UID: "77a46884-8951-48a4-8266-0da4f446c122"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.704800 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a46884-8951-48a4-8266-0da4f446c122-kube-api-access-vj27s" (OuterVolumeSpecName: "kube-api-access-vj27s") pod "77a46884-8951-48a4-8266-0da4f446c122" (UID: "77a46884-8951-48a4-8266-0da4f446c122"). InnerVolumeSpecName "kube-api-access-vj27s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.787817 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77a46884-8951-48a4-8266-0da4f446c122" (UID: "77a46884-8951-48a4-8266-0da4f446c122"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.798946 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.799001 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77a46884-8951-48a4-8266-0da4f446c122-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:18 crc kubenswrapper[4688]: I1125 12:27:18.799014 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj27s\" (UniqueName: \"kubernetes.io/projected/77a46884-8951-48a4-8266-0da4f446c122-kube-api-access-vj27s\") on node \"crc\" DevicePath \"\"" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.225218 4688 generic.go:334] "Generic (PLEG): container finished" podID="77a46884-8951-48a4-8266-0da4f446c122" containerID="e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254" exitCode=0 Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.225251 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbb2t" event={"ID":"77a46884-8951-48a4-8266-0da4f446c122","Type":"ContainerDied","Data":"e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254"} Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.225298 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbb2t" event={"ID":"77a46884-8951-48a4-8266-0da4f446c122","Type":"ContainerDied","Data":"d73e294ca1e7653294f4e2b7f9a89c949ace9b81da599b061c0cd050f4364791"} Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.225318 4688 scope.go:117] "RemoveContainer" containerID="e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.225322 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbb2t" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.246543 4688 scope.go:117] "RemoveContainer" containerID="2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.261956 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbb2t"] Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.266858 4688 scope.go:117] "RemoveContainer" containerID="600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.269907 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mbb2t"] Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.281742 4688 scope.go:117] "RemoveContainer" containerID="e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254" Nov 25 12:27:19 crc kubenswrapper[4688]: E1125 12:27:19.282357 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254\": container with ID starting with e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254 not found: ID does not exist" containerID="e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.282410 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254"} err="failed to get container status \"e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254\": rpc error: code = NotFound desc = could not find container \"e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254\": container with ID starting with e94e172284dd341e58141414b22834123099233f5b43a412def0dbea897d6254 not found: ID does not exist" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.282442 4688 scope.go:117] "RemoveContainer" containerID="2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1" Nov 25 12:27:19 crc kubenswrapper[4688]: E1125 12:27:19.283009 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1\": container with ID starting with 2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1 not found: ID does not exist" containerID="2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.283056 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1"} err="failed to get container status \"2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1\": rpc error: code = NotFound desc = could not find container \"2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1\": container with ID starting with 2c718f99ee85929cbf98ef67c13bce0376687e00f4f3091dc147b05b033050c1 not found: ID does not exist" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.283092 4688 scope.go:117] "RemoveContainer" containerID="600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3" Nov 25 12:27:19 crc kubenswrapper[4688]: E1125 12:27:19.283559 4688 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3\": container with ID starting with 600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3 not found: ID does not exist" containerID="600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3" Nov 25 12:27:19 crc kubenswrapper[4688]: I1125 12:27:19.283592 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3"} err="failed to get container status \"600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3\": rpc error: code = NotFound desc = could not find container \"600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3\": container with ID starting with 600017bc434d7c46047e718134622c2d8441353fa88a129204c2077f5bff53e3 not found: ID does not exist" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.751911 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a46884-8951-48a4-8266-0da4f446c122" path="/var/lib/kubelet/pods/77a46884-8951-48a4-8266-0da4f446c122/volumes" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.984046 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m"] Nov 25 12:27:20 crc kubenswrapper[4688]: E1125 12:27:20.984298 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerName="pull" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.984320 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerName="pull" Nov 25 12:27:20 crc kubenswrapper[4688]: E1125 12:27:20.984340 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a46884-8951-48a4-8266-0da4f446c122" containerName="extract-content" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.984348 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a46884-8951-48a4-8266-0da4f446c122" containerName="extract-content" Nov 25 12:27:20 crc kubenswrapper[4688]: E1125 12:27:20.984365 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerName="extract" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.984371 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerName="extract" Nov 25 12:27:20 crc kubenswrapper[4688]: E1125 12:27:20.984387 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a46884-8951-48a4-8266-0da4f446c122" containerName="extract-utilities" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.984395 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a46884-8951-48a4-8266-0da4f446c122" containerName="extract-utilities" Nov 25 12:27:20 crc kubenswrapper[4688]: E1125 12:27:20.984411 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a46884-8951-48a4-8266-0da4f446c122" containerName="registry-server" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.984418 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a46884-8951-48a4-8266-0da4f446c122" containerName="registry-server" Nov 25 12:27:20 crc kubenswrapper[4688]: E1125 12:27:20.984428 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerName="util" Nov 25 12:27:20 crc 
kubenswrapper[4688]: I1125 12:27:20.984435 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerName="util" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.984620 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9012fbba-8b92-4bbe-88ec-1ac46a53ce34" containerName="extract" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.984636 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a46884-8951-48a4-8266-0da4f446c122" containerName="registry-server" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.985101 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.986636 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9vktg" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.986854 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.987560 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.990512 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.990533 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 12:27:20 crc kubenswrapper[4688]: I1125 12:27:20.998022 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m"] Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.126751 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-apiservice-cert\") pod \"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.127129 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-webhook-cert\") pod \"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.127179 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27gnj\" (UniqueName: \"kubernetes.io/projected/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-kube-api-access-27gnj\") pod \"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.228942 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-webhook-cert\") pod 
\"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.229004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27gnj\" (UniqueName: \"kubernetes.io/projected/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-kube-api-access-27gnj\") pod \"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.229072 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-apiservice-cert\") pod \"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.234036 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-apiservice-cert\") pod \"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.234780 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-webhook-cert\") pod \"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.279049 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27gnj\" (UniqueName: \"kubernetes.io/projected/d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3-kube-api-access-27gnj\") pod \"metallb-operator-controller-manager-744bc4ddc8-58c5m\" (UID: \"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3\") " pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.301538 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.451776 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l"] Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.452480 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.460830 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.461033 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qzhjq" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.461713 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.492502 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l"] Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.535641 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29f74196-0858-470b-8d69-2a8c67753827-apiservice-cert\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.535971 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29f74196-0858-470b-8d69-2a8c67753827-webhook-cert\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.536013 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r8bn\" (UniqueName: \"kubernetes.io/projected/29f74196-0858-470b-8d69-2a8c67753827-kube-api-access-8r8bn\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.636936 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29f74196-0858-470b-8d69-2a8c67753827-apiservice-cert\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.636986 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29f74196-0858-470b-8d69-2a8c67753827-webhook-cert\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.637026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r8bn\" (UniqueName: \"kubernetes.io/projected/29f74196-0858-470b-8d69-2a8c67753827-kube-api-access-8r8bn\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.643729 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29f74196-0858-470b-8d69-2a8c67753827-webhook-cert\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.644259 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29f74196-0858-470b-8d69-2a8c67753827-apiservice-cert\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.655488 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r8bn\" (UniqueName: \"kubernetes.io/projected/29f74196-0858-470b-8d69-2a8c67753827-kube-api-access-8r8bn\") pod \"metallb-operator-webhook-server-94654dbc4-7h22l\" (UID: \"29f74196-0858-470b-8d69-2a8c67753827\") " pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.785208 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:21 crc kubenswrapper[4688]: I1125 12:27:21.810906 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m"] Nov 25 12:27:21 crc kubenswrapper[4688]: W1125 12:27:21.823152 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a9c5_3ebd_49be_84d0_eb05c2b8e7b3.slice/crio-d771c92ea5e2b1701952171c86d2493bf6ef08ece44e71a46cf10cefa661fe14 WatchSource:0}: Error finding container d771c92ea5e2b1701952171c86d2493bf6ef08ece44e71a46cf10cefa661fe14: Status 404 returned error can't find the container with id d771c92ea5e2b1701952171c86d2493bf6ef08ece44e71a46cf10cefa661fe14 Nov 25 12:27:22 crc kubenswrapper[4688]: I1125 12:27:22.204025 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l"] Nov 25 12:27:22 crc kubenswrapper[4688]: W1125 12:27:22.208058 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29f74196_0858_470b_8d69_2a8c67753827.slice/crio-e4587791d3d4aa745c2038713d0428b281d9e3efd489ee15f877f51a30ed04a2 WatchSource:0}: Error finding container e4587791d3d4aa745c2038713d0428b281d9e3efd489ee15f877f51a30ed04a2: Status 404 returned error can't find the container with id e4587791d3d4aa745c2038713d0428b281d9e3efd489ee15f877f51a30ed04a2 Nov 25 12:27:22 crc kubenswrapper[4688]: I1125 12:27:22.244870 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" event={"ID":"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3","Type":"ContainerStarted","Data":"d771c92ea5e2b1701952171c86d2493bf6ef08ece44e71a46cf10cefa661fe14"} Nov 25 12:27:22 crc kubenswrapper[4688]: I1125 12:27:22.245869 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" 
event={"ID":"29f74196-0858-470b-8d69-2a8c67753827","Type":"ContainerStarted","Data":"e4587791d3d4aa745c2038713d0428b281d9e3efd489ee15f877f51a30ed04a2"} Nov 25 12:27:28 crc kubenswrapper[4688]: I1125 12:27:28.296415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" event={"ID":"29f74196-0858-470b-8d69-2a8c67753827","Type":"ContainerStarted","Data":"e0a079495c30d03b868590b54f6e7f65aa41b47dcbf0d2bb002920b78a6b1b9e"} Nov 25 12:27:28 crc kubenswrapper[4688]: I1125 12:27:28.296997 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:28 crc kubenswrapper[4688]: I1125 12:27:28.298482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" event={"ID":"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3","Type":"ContainerStarted","Data":"1cc79231e22c48243f924a28ac4a1809fc2e31145a2ca4c333619082ed800858"} Nov 25 12:27:28 crc kubenswrapper[4688]: I1125 12:27:28.298883 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:27:28 crc kubenswrapper[4688]: I1125 12:27:28.323132 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" podStartSLOduration=1.673533081 podStartE2EDuration="7.323116735s" podCreationTimestamp="2025-11-25 12:27:21 +0000 UTC" firstStartedPulling="2025-11-25 12:27:22.211320676 +0000 UTC m=+792.320949544" lastFinishedPulling="2025-11-25 12:27:27.86090433 +0000 UTC m=+797.970533198" observedRunningTime="2025-11-25 12:27:28.320906637 +0000 UTC m=+798.430535515" watchObservedRunningTime="2025-11-25 12:27:28.323116735 +0000 UTC m=+798.432745603" Nov 25 12:27:28 crc kubenswrapper[4688]: I1125 12:27:28.352280 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" podStartSLOduration=2.7273625900000003 podStartE2EDuration="8.35226127s" podCreationTimestamp="2025-11-25 12:27:20 +0000 UTC" firstStartedPulling="2025-11-25 12:27:21.825437786 +0000 UTC m=+791.935066654" lastFinishedPulling="2025-11-25 12:27:27.450336466 +0000 UTC m=+797.559965334" observedRunningTime="2025-11-25 12:27:28.351333896 +0000 UTC m=+798.460962764" watchObservedRunningTime="2025-11-25 12:27:28.35226127 +0000 UTC m=+798.461890148" Nov 25 12:27:41 crc kubenswrapper[4688]: I1125 12:27:41.790308 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-94654dbc4-7h22l" Nov 25 12:27:47 crc kubenswrapper[4688]: I1125 12:27:47.854233 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:27:47 crc kubenswrapper[4688]: I1125 12:27:47.854889 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:27:47 crc kubenswrapper[4688]: I1125 
12:27:47.854946 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:27:47 crc kubenswrapper[4688]: I1125 12:27:47.855657 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8fce2e1ba2b0b0a8ccd7a9e7c79c4f46ac3a4e41d62d29310173c9c94b065de"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:27:47 crc kubenswrapper[4688]: I1125 12:27:47.855723 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://f8fce2e1ba2b0b0a8ccd7a9e7c79c4f46ac3a4e41d62d29310173c9c94b065de" gracePeriod=600 Nov 25 12:27:48 crc kubenswrapper[4688]: I1125 12:27:48.423589 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="f8fce2e1ba2b0b0a8ccd7a9e7c79c4f46ac3a4e41d62d29310173c9c94b065de" exitCode=0 Nov 25 12:27:48 crc kubenswrapper[4688]: I1125 12:27:48.424017 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"f8fce2e1ba2b0b0a8ccd7a9e7c79c4f46ac3a4e41d62d29310173c9c94b065de"} Nov 25 12:27:48 crc kubenswrapper[4688]: I1125 12:27:48.424054 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"bd4c77f22f04f95d12c0a6e31890a8c2be94485d18b032708ee7f7a088bd619a"} Nov 25 12:27:48 crc kubenswrapper[4688]: I1125 12:27:48.424076 4688 scope.go:117] "RemoveContainer" containerID="fe2ba0924be4215985e9fa9117124142232ec7fc1bf1aff1c7218dd864800a1d" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.672953 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pc74g"] Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.677701 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.690971 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc74g"] Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.702844 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-utilities\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.702892 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-catalog-content\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.703181 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp5tm\" (UniqueName: \"kubernetes.io/projected/598ca33c-09c1-44ec-a195-e47097d41bb1-kube-api-access-tp5tm\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.804967 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-utilities\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.805029 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-catalog-content\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.805153 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp5tm\" (UniqueName: \"kubernetes.io/projected/598ca33c-09c1-44ec-a195-e47097d41bb1-kube-api-access-tp5tm\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.805664 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-catalog-content\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.806559 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-utilities\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.827954 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tp5tm\" (UniqueName: \"kubernetes.io/projected/598ca33c-09c1-44ec-a195-e47097d41bb1-kube-api-access-tp5tm\") pod \"community-operators-pc74g\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:55 crc kubenswrapper[4688]: I1125 12:27:55.996829 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:27:56 crc kubenswrapper[4688]: I1125 12:27:56.507083 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc74g"] Nov 25 12:27:57 crc kubenswrapper[4688]: I1125 12:27:57.487492 4688 generic.go:334] "Generic (PLEG): container finished" podID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerID="3aed970505239a614d1a38e745d73e0b95bd389f4382355a5533c9991d0566c9" exitCode=0 Nov 25 12:27:57 crc kubenswrapper[4688]: I1125 12:27:57.487602 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc74g" event={"ID":"598ca33c-09c1-44ec-a195-e47097d41bb1","Type":"ContainerDied","Data":"3aed970505239a614d1a38e745d73e0b95bd389f4382355a5533c9991d0566c9"} Nov 25 12:27:57 crc kubenswrapper[4688]: I1125 12:27:57.488048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc74g" event={"ID":"598ca33c-09c1-44ec-a195-e47097d41bb1","Type":"ContainerStarted","Data":"1c01b8b053368ab6425ef2af5a5d08386eadeed5041830e623a8d1b04b9d96d1"} Nov 25 12:27:58 crc kubenswrapper[4688]: I1125 12:27:58.495744 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc74g" event={"ID":"598ca33c-09c1-44ec-a195-e47097d41bb1","Type":"ContainerStarted","Data":"6b3b2bb2a29180d1efea7845744580c3212dcf7455614b9a2537292106a69ec5"} Nov 25 12:27:59 crc kubenswrapper[4688]: I1125 12:27:59.503106 4688 generic.go:334] "Generic (PLEG): container finished" podID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerID="6b3b2bb2a29180d1efea7845744580c3212dcf7455614b9a2537292106a69ec5" exitCode=0 Nov 25 12:27:59 crc kubenswrapper[4688]: I1125 12:27:59.503148 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc74g" event={"ID":"598ca33c-09c1-44ec-a195-e47097d41bb1","Type":"ContainerDied","Data":"6b3b2bb2a29180d1efea7845744580c3212dcf7455614b9a2537292106a69ec5"} Nov 25 12:28:00 crc kubenswrapper[4688]: I1125 12:28:00.509942 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc74g" event={"ID":"598ca33c-09c1-44ec-a195-e47097d41bb1","Type":"ContainerStarted","Data":"4e555ae121d0899a1ef186d87cc61815f7c839ee543308dd8300724db8d3b7e1"} Nov 25 12:28:00 crc kubenswrapper[4688]: I1125 12:28:00.530699 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pc74g" podStartSLOduration=3.092339339 podStartE2EDuration="5.53067616s" podCreationTimestamp="2025-11-25 12:27:55 +0000 UTC" firstStartedPulling="2025-11-25 12:27:57.49033655 +0000 UTC m=+827.599965408" lastFinishedPulling="2025-11-25 12:27:59.928673361 +0000 UTC m=+830.038302229" observedRunningTime="2025-11-25 12:28:00.526276882 +0000 UTC m=+830.635905770" watchObservedRunningTime="2025-11-25 12:28:00.53067616 +0000 UTC m=+830.640305028" Nov 25 12:28:01 crc kubenswrapper[4688]: I1125 12:28:01.306029 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.029693 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5rbtk"] Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.032541 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.033607 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9"] Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.034328 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.034968 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.035292 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7xlf2" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.035289 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.035438 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.050503 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9"] Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.087624 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-metrics-certs\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.087955 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49tfk\" (UniqueName: \"kubernetes.io/projected/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-kube-api-access-49tfk\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.088079 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8a77cbc-9814-4996-9ee3-d1e63f581842-cert\") pod \"frr-k8s-webhook-server-6998585d5-bdmv9\" (UID: \"a8a77cbc-9814-4996-9ee3-d1e63f581842\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.088184 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-metrics\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.088260 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-conf\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " 
pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.088410 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-startup\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.088553 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-reloader\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.088660 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-sockets\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.088758 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbxt\" (UniqueName: \"kubernetes.io/projected/a8a77cbc-9814-4996-9ee3-d1e63f581842-kube-api-access-brbxt\") pod \"frr-k8s-webhook-server-6998585d5-bdmv9\" (UID: \"a8a77cbc-9814-4996-9ee3-d1e63f581842\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.140791 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lthrr"] Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.142150 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.143809 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9kv58" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.144083 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-8kfck"] Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.144344 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.144978 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.145298 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.147388 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.148013 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.170285 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-8kfck"] Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.189541 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6cm9\" (UniqueName: \"kubernetes.io/projected/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-kube-api-access-q6cm9\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.189603 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-metrics-certs\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.189624 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-cert\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.189648 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49tfk\" (UniqueName: \"kubernetes.io/projected/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-kube-api-access-49tfk\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.189666 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8a77cbc-9814-4996-9ee3-d1e63f581842-cert\") pod \"frr-k8s-webhook-server-6998585d5-bdmv9\" (UID: \"a8a77cbc-9814-4996-9ee3-d1e63f581842\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.189687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190259 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-metrics\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190408 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-conf\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-startup\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190638 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-reloader\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190751 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metallb-excludel2\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190859 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnxp\" (UniqueName: \"kubernetes.io/projected/e48219fb-3aae-42ff-8dec-3d952e97aff1-kube-api-access-clnxp\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190974 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-sockets\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.191098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbxt\" (UniqueName: \"kubernetes.io/projected/a8a77cbc-9814-4996-9ee3-d1e63f581842-kube-api-access-brbxt\") pod \"frr-k8s-webhook-server-6998585d5-bdmv9\" (UID: \"a8a77cbc-9814-4996-9ee3-d1e63f581842\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.191215 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-sockets\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-conf\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190757 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-metrics\") pod \"frr-k8s-5rbtk\" (UID: 
\"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.190988 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-reloader\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.191566 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-frr-startup\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.191672 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-metrics-certs\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.191785 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metrics-certs\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.197292 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-metrics-certs\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.197577 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8a77cbc-9814-4996-9ee3-d1e63f581842-cert\") pod \"frr-k8s-webhook-server-6998585d5-bdmv9\" (UID: \"a8a77cbc-9814-4996-9ee3-d1e63f581842\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.207862 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49tfk\" (UniqueName: \"kubernetes.io/projected/6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4-kube-api-access-49tfk\") pod \"frr-k8s-5rbtk\" (UID: \"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4\") " pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.210186 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbxt\" (UniqueName: \"kubernetes.io/projected/a8a77cbc-9814-4996-9ee3-d1e63f581842-kube-api-access-brbxt\") pod \"frr-k8s-webhook-server-6998585d5-bdmv9\" (UID: \"a8a77cbc-9814-4996-9ee3-d1e63f581842\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.293866 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnxp\" (UniqueName: \"kubernetes.io/projected/e48219fb-3aae-42ff-8dec-3d952e97aff1-kube-api-access-clnxp\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.294258 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metallb-excludel2\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.294449 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metrics-certs\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.294591 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6cm9\" (UniqueName: \"kubernetes.io/projected/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-kube-api-access-q6cm9\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: E1125 12:28:02.294671 4688 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 25 12:28:02 crc kubenswrapper[4688]: E1125 12:28:02.294760 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metrics-certs podName:4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7 nodeName:}" failed. No retries permitted until 2025-11-25 12:28:02.794737708 +0000 UTC m=+832.904366576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metrics-certs") pod "speaker-lthrr" (UID: "4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7") : secret "speaker-certs-secret" not found Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.294698 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-metrics-certs\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: E1125 12:28:02.294911 4688 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 25 12:28:02 crc kubenswrapper[4688]: E1125 12:28:02.294970 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-metrics-certs podName:e48219fb-3aae-42ff-8dec-3d952e97aff1 nodeName:}" failed. No retries permitted until 2025-11-25 12:28:02.794953014 +0000 UTC m=+832.904581982 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-metrics-certs") pod "controller-6c7b4b5f48-8kfck" (UID: "e48219fb-3aae-42ff-8dec-3d952e97aff1") : secret "controller-certs-secret" not found Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.295042 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-cert\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.295150 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: E1125 12:28:02.295319 4688 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 12:28:02 crc kubenswrapper[4688]: E1125 12:28:02.295435 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist podName:4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7 nodeName:}" failed. No retries permitted until 2025-11-25 12:28:02.795421977 +0000 UTC m=+832.905050845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist") pod "speaker-lthrr" (UID: "4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7") : secret "metallb-memberlist" not found Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.295060 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metallb-excludel2\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.298296 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-cert\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.319443 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6cm9\" (UniqueName: \"kubernetes.io/projected/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-kube-api-access-q6cm9\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.321948 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clnxp\" (UniqueName: \"kubernetes.io/projected/e48219fb-3aae-42ff-8dec-3d952e97aff1-kube-api-access-clnxp\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.353936 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.362798 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.525325 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerStarted","Data":"4ef9346cf0872ebca5072bff0512490be80be854b941d232630a5d2c96b33863"} Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.786175 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9"] Nov 25 12:28:02 crc kubenswrapper[4688]: W1125 12:28:02.791397 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8a77cbc_9814_4996_9ee3_d1e63f581842.slice/crio-45c5f733d0fb63f92d116e35badb4ca25a55488497453fd66211ff07eda5b0a7 WatchSource:0}: Error finding container 45c5f733d0fb63f92d116e35badb4ca25a55488497453fd66211ff07eda5b0a7: Status 404 returned error can't find the container with id 45c5f733d0fb63f92d116e35badb4ca25a55488497453fd66211ff07eda5b0a7 Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.800828 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metrics-certs\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.800900 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-metrics-certs\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.800923 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: E1125 12:28:02.801883 4688 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 12:28:02 crc kubenswrapper[4688]: E1125 12:28:02.801967 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist podName:4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7 nodeName:}" failed. No retries permitted until 2025-11-25 12:28:03.801948335 +0000 UTC m=+833.911577203 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist") pod "speaker-lthrr" (UID: "4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7") : secret "metallb-memberlist" not found Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.807935 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-metrics-certs\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:02 crc kubenswrapper[4688]: I1125 12:28:02.808362 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48219fb-3aae-42ff-8dec-3d952e97aff1-metrics-certs\") pod \"controller-6c7b4b5f48-8kfck\" (UID: \"e48219fb-3aae-42ff-8dec-3d952e97aff1\") " pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:03 crc kubenswrapper[4688]: I1125 12:28:03.085382 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:03 crc kubenswrapper[4688]: I1125 12:28:03.472900 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-8kfck"] Nov 25 12:28:03 crc kubenswrapper[4688]: I1125 12:28:03.533702 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" event={"ID":"a8a77cbc-9814-4996-9ee3-d1e63f581842","Type":"ContainerStarted","Data":"45c5f733d0fb63f92d116e35badb4ca25a55488497453fd66211ff07eda5b0a7"} Nov 25 12:28:03 crc kubenswrapper[4688]: I1125 12:28:03.534933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8kfck" event={"ID":"e48219fb-3aae-42ff-8dec-3d952e97aff1","Type":"ContainerStarted","Data":"8b6706b076cf1aeaa21b382c86c5667e2728e21635792e205fdae168f75b198e"} Nov 25 12:28:03 crc kubenswrapper[4688]: I1125 12:28:03.824968 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:03 crc kubenswrapper[4688]: I1125 12:28:03.832096 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7-memberlist\") pod \"speaker-lthrr\" (UID: \"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7\") " pod="metallb-system/speaker-lthrr" Nov 25 12:28:03 crc kubenswrapper[4688]: I1125 12:28:03.967485 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lthrr" Nov 25 12:28:04 crc kubenswrapper[4688]: W1125 12:28:04.003907 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eab9fc3_d9f4_4d77_ba40_1ef9bd4bddd7.slice/crio-8824781e16e4ebc86cbc57df895fa108d09036e2fab94b962269fc712360b236 WatchSource:0}: Error finding container 8824781e16e4ebc86cbc57df895fa108d09036e2fab94b962269fc712360b236: Status 404 returned error can't find the container with id 8824781e16e4ebc86cbc57df895fa108d09036e2fab94b962269fc712360b236 Nov 25 12:28:04 crc kubenswrapper[4688]: I1125 12:28:04.545470 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lthrr" event={"ID":"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7","Type":"ContainerStarted","Data":"dce10b5e7080f2b109d8e608d06000dc084b7973ec338244db59b80d932e26cf"} Nov 25 12:28:04 crc kubenswrapper[4688]: I1125 12:28:04.546096 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lthrr" event={"ID":"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7","Type":"ContainerStarted","Data":"8824781e16e4ebc86cbc57df895fa108d09036e2fab94b962269fc712360b236"} Nov 25 12:28:04 crc kubenswrapper[4688]: I1125 12:28:04.559082 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8kfck" event={"ID":"e48219fb-3aae-42ff-8dec-3d952e97aff1","Type":"ContainerStarted","Data":"6c48dd63a08537f1e85a2d6372881c48e5bb785093b22a5ca16fab470d3663ee"} Nov 25 12:28:04 crc kubenswrapper[4688]: I1125 12:28:04.559155 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8kfck" event={"ID":"e48219fb-3aae-42ff-8dec-3d952e97aff1","Type":"ContainerStarted","Data":"5d9ad70e56b269e90c294ebb80c576111571ed305f4ce517629aa6e9cc7ebf43"} Nov 25 12:28:04 crc kubenswrapper[4688]: I1125 12:28:04.559195 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:04 crc kubenswrapper[4688]: I1125 12:28:04.579930 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-8kfck" podStartSLOduration=2.579911616 podStartE2EDuration="2.579911616s" podCreationTimestamp="2025-11-25 12:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:28:04.5782183 +0000 UTC m=+834.687847178" watchObservedRunningTime="2025-11-25 12:28:04.579911616 +0000 UTC m=+834.689540484" Nov 25 12:28:05 crc kubenswrapper[4688]: I1125 12:28:05.572018 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lthrr" event={"ID":"4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7","Type":"ContainerStarted","Data":"640f3145b092631e349461de9343cc267237b69cde995fb796b5807c52a4734c"} Nov 25 12:28:05 crc kubenswrapper[4688]: I1125 12:28:05.572195 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lthrr" Nov 25 12:28:05 crc kubenswrapper[4688]: I1125 12:28:05.590339 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lthrr" podStartSLOduration=3.590317941 podStartE2EDuration="3.590317941s" podCreationTimestamp="2025-11-25 12:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:28:05.586082107 +0000 UTC m=+835.695710975" 
watchObservedRunningTime="2025-11-25 12:28:05.590317941 +0000 UTC m=+835.699946809" Nov 25 12:28:05 crc kubenswrapper[4688]: I1125 12:28:05.997711 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:28:05 crc kubenswrapper[4688]: I1125 12:28:05.998185 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:28:06 crc kubenswrapper[4688]: I1125 12:28:06.064985 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:28:06 crc kubenswrapper[4688]: I1125 12:28:06.633363 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:28:06 crc kubenswrapper[4688]: I1125 12:28:06.692673 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pc74g"] Nov 25 12:28:08 crc kubenswrapper[4688]: I1125 12:28:08.600715 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pc74g" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerName="registry-server" containerID="cri-o://4e555ae121d0899a1ef186d87cc61815f7c839ee543308dd8300724db8d3b7e1" gracePeriod=2 Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.304602 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2r9dn"] Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.305920 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.319034 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r9dn"] Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.426918 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-utilities\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.426986 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlz9z\" (UniqueName: \"kubernetes.io/projected/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-kube-api-access-hlz9z\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.427140 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-catalog-content\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.528793 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlz9z\" (UniqueName: \"kubernetes.io/projected/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-kube-api-access-hlz9z\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " 
pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.528855 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-catalog-content\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.528938 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-utilities\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.529583 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-utilities\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.529695 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-catalog-content\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.566839 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlz9z\" (UniqueName: \"kubernetes.io/projected/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-kube-api-access-hlz9z\") pod \"redhat-marketplace-2r9dn\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.612586 4688 generic.go:334] "Generic (PLEG): container finished" podID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerID="4e555ae121d0899a1ef186d87cc61815f7c839ee543308dd8300724db8d3b7e1" exitCode=0 Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.612651 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc74g" event={"ID":"598ca33c-09c1-44ec-a195-e47097d41bb1","Type":"ContainerDied","Data":"4e555ae121d0899a1ef186d87cc61815f7c839ee543308dd8300724db8d3b7e1"} Nov 25 12:28:09 crc kubenswrapper[4688]: I1125 12:28:09.644640 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.263009 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.308018 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9w5t"] Nov 25 12:28:10 crc kubenswrapper[4688]: E1125 12:28:10.308567 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerName="registry-server" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.308585 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerName="registry-server" Nov 25 12:28:10 crc kubenswrapper[4688]: E1125 12:28:10.308601 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerName="extract-utilities" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.308610 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerName="extract-utilities" Nov 25 12:28:10 crc kubenswrapper[4688]: E1125 12:28:10.308621 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerName="extract-content" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.308628 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerName="extract-content" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.308765 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" containerName="registry-server" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.309687 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.319259 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9w5t"] Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.344380 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-utilities\") pod \"598ca33c-09c1-44ec-a195-e47097d41bb1\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.344490 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp5tm\" (UniqueName: \"kubernetes.io/projected/598ca33c-09c1-44ec-a195-e47097d41bb1-kube-api-access-tp5tm\") pod \"598ca33c-09c1-44ec-a195-e47097d41bb1\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.344570 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-catalog-content\") pod \"598ca33c-09c1-44ec-a195-e47097d41bb1\" (UID: \"598ca33c-09c1-44ec-a195-e47097d41bb1\") " Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.346432 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-utilities" (OuterVolumeSpecName: "utilities") pod "598ca33c-09c1-44ec-a195-e47097d41bb1" (UID: "598ca33c-09c1-44ec-a195-e47097d41bb1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.352493 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598ca33c-09c1-44ec-a195-e47097d41bb1-kube-api-access-tp5tm" (OuterVolumeSpecName: "kube-api-access-tp5tm") pod "598ca33c-09c1-44ec-a195-e47097d41bb1" (UID: "598ca33c-09c1-44ec-a195-e47097d41bb1"). InnerVolumeSpecName "kube-api-access-tp5tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.409542 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "598ca33c-09c1-44ec-a195-e47097d41bb1" (UID: "598ca33c-09c1-44ec-a195-e47097d41bb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.446940 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-catalog-content\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.447181 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7rc\" (UniqueName: \"kubernetes.io/projected/d88165cf-f348-4918-a623-814e0d4cdabe-kube-api-access-bh7rc\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.447253 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-utilities\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.447406 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.447429 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp5tm\" (UniqueName: \"kubernetes.io/projected/598ca33c-09c1-44ec-a195-e47097d41bb1-kube-api-access-tp5tm\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.447441 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598ca33c-09c1-44ec-a195-e47097d41bb1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.482508 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r9dn"] Nov 25 12:28:10 crc kubenswrapper[4688]: W1125 12:28:10.490073 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8fc317_d49d_4c40_b68b_696dd4dd96bd.slice/crio-89cff5e9e3c3ae1cc57c9de40f814a8d87e3c4cb93cd5b74bd0fbe9eb90f2f9b WatchSource:0}: Error finding container 
89cff5e9e3c3ae1cc57c9de40f814a8d87e3c4cb93cd5b74bd0fbe9eb90f2f9b: Status 404 returned error can't find the container with id 89cff5e9e3c3ae1cc57c9de40f814a8d87e3c4cb93cd5b74bd0fbe9eb90f2f9b Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.548887 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-catalog-content\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.549002 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh7rc\" (UniqueName: \"kubernetes.io/projected/d88165cf-f348-4918-a623-814e0d4cdabe-kube-api-access-bh7rc\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.549033 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-utilities\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.549689 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-utilities\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.549851 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-catalog-content\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.568709 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh7rc\" (UniqueName: \"kubernetes.io/projected/d88165cf-f348-4918-a623-814e0d4cdabe-kube-api-access-bh7rc\") pod \"certified-operators-r9w5t\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.623471 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r9dn" event={"ID":"3e8fc317-d49d-4c40-b68b-696dd4dd96bd","Type":"ContainerStarted","Data":"89cff5e9e3c3ae1cc57c9de40f814a8d87e3c4cb93cd5b74bd0fbe9eb90f2f9b"} Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.627629 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" event={"ID":"a8a77cbc-9814-4996-9ee3-d1e63f581842","Type":"ContainerStarted","Data":"0e97a66ddc54c63a6ecc1ed5b4199cd41d954d4de2e44348379ceab157f899ba"} Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.627778 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.629055 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4" containerID="4293edebb6c9031084c845f2a8cd963f554733dbfa8e304825e280c08136e5f9" exitCode=0 Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.629113 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerDied","Data":"4293edebb6c9031084c845f2a8cd963f554733dbfa8e304825e280c08136e5f9"} Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.632881 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc74g" event={"ID":"598ca33c-09c1-44ec-a195-e47097d41bb1","Type":"ContainerDied","Data":"1c01b8b053368ab6425ef2af5a5d08386eadeed5041830e623a8d1b04b9d96d1"} Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.632938 4688 scope.go:117] "RemoveContainer" containerID="4e555ae121d0899a1ef186d87cc61815f7c839ee543308dd8300724db8d3b7e1" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.633139 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc74g" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.643950 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.646667 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" podStartSLOduration=1.299307756 podStartE2EDuration="8.646649783s" podCreationTimestamp="2025-11-25 12:28:02 +0000 UTC" firstStartedPulling="2025-11-25 12:28:02.793714603 +0000 UTC m=+832.903343471" lastFinishedPulling="2025-11-25 12:28:10.14105663 +0000 UTC m=+840.250685498" observedRunningTime="2025-11-25 12:28:10.645188873 +0000 UTC m=+840.754817741" watchObservedRunningTime="2025-11-25 12:28:10.646649783 +0000 UTC m=+840.756278651" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.659332 4688 scope.go:117] "RemoveContainer" containerID="6b3b2bb2a29180d1efea7845744580c3212dcf7455614b9a2537292106a69ec5" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.674909 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pc74g"] Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.692756 4688 scope.go:117] "RemoveContainer" containerID="3aed970505239a614d1a38e745d73e0b95bd389f4382355a5533c9991d0566c9" Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.703165 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pc74g"] Nov 25 12:28:10 crc kubenswrapper[4688]: I1125 12:28:10.750457 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598ca33c-09c1-44ec-a195-e47097d41bb1" path="/var/lib/kubelet/pods/598ca33c-09c1-44ec-a195-e47097d41bb1/volumes" Nov 25 12:28:10 crc kubenswrapper[4688]: E1125 12:28:10.812924 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod598ca33c_09c1_44ec_a195_e47097d41bb1.slice/crio-1c01b8b053368ab6425ef2af5a5d08386eadeed5041830e623a8d1b04b9d96d1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod598ca33c_09c1_44ec_a195_e47097d41bb1.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:28:11 crc kubenswrapper[4688]: I1125 12:28:11.172569 
4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9w5t"] Nov 25 12:28:11 crc kubenswrapper[4688]: W1125 12:28:11.208030 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd88165cf_f348_4918_a623_814e0d4cdabe.slice/crio-13b66ddd581f18c9752e6c56e15c6441ff91930be9e02227693ac99e8f9809ab WatchSource:0}: Error finding container 13b66ddd581f18c9752e6c56e15c6441ff91930be9e02227693ac99e8f9809ab: Status 404 returned error can't find the container with id 13b66ddd581f18c9752e6c56e15c6441ff91930be9e02227693ac99e8f9809ab Nov 25 12:28:11 crc kubenswrapper[4688]: I1125 12:28:11.639982 4688 generic.go:334] "Generic (PLEG): container finished" podID="d88165cf-f348-4918-a623-814e0d4cdabe" containerID="2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547" exitCode=0 Nov 25 12:28:11 crc kubenswrapper[4688]: I1125 12:28:11.640054 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9w5t" event={"ID":"d88165cf-f348-4918-a623-814e0d4cdabe","Type":"ContainerDied","Data":"2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547"} Nov 25 12:28:11 crc kubenswrapper[4688]: I1125 12:28:11.640086 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9w5t" event={"ID":"d88165cf-f348-4918-a623-814e0d4cdabe","Type":"ContainerStarted","Data":"13b66ddd581f18c9752e6c56e15c6441ff91930be9e02227693ac99e8f9809ab"} Nov 25 12:28:11 crc kubenswrapper[4688]: I1125 12:28:11.642284 4688 generic.go:334] "Generic (PLEG): container finished" podID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerID="8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3" exitCode=0 Nov 25 12:28:11 crc kubenswrapper[4688]: I1125 12:28:11.642375 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r9dn" event={"ID":"3e8fc317-d49d-4c40-b68b-696dd4dd96bd","Type":"ContainerDied","Data":"8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3"} Nov 25 12:28:12 crc kubenswrapper[4688]: I1125 12:28:12.651302 4688 generic.go:334] "Generic (PLEG): container finished" podID="6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4" containerID="e61934f72a7f837280f2af0eb710e21a7f49f2862a573e431ba849da4240dde6" exitCode=0 Nov 25 12:28:12 crc kubenswrapper[4688]: I1125 12:28:12.651357 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerDied","Data":"e61934f72a7f837280f2af0eb710e21a7f49f2862a573e431ba849da4240dde6"} Nov 25 12:28:12 crc kubenswrapper[4688]: I1125 12:28:12.655000 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9w5t" event={"ID":"d88165cf-f348-4918-a623-814e0d4cdabe","Type":"ContainerStarted","Data":"f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce"} Nov 25 12:28:12 crc kubenswrapper[4688]: I1125 12:28:12.657718 4688 generic.go:334] "Generic (PLEG): container finished" podID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerID="97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525" exitCode=0 Nov 25 12:28:12 crc kubenswrapper[4688]: I1125 12:28:12.657943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r9dn" 
event={"ID":"3e8fc317-d49d-4c40-b68b-696dd4dd96bd","Type":"ContainerDied","Data":"97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525"} Nov 25 12:28:13 crc kubenswrapper[4688]: I1125 12:28:13.090814 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-8kfck" Nov 25 12:28:13 crc kubenswrapper[4688]: I1125 12:28:13.667004 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r9dn" event={"ID":"3e8fc317-d49d-4c40-b68b-696dd4dd96bd","Type":"ContainerStarted","Data":"56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f"} Nov 25 12:28:13 crc kubenswrapper[4688]: I1125 12:28:13.669863 4688 generic.go:334] "Generic (PLEG): container finished" podID="6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4" containerID="5e4f08cce4785643bd70a931fb246f14f3e7f84857584e0058c9f01f7f9ac266" exitCode=0 Nov 25 12:28:13 crc kubenswrapper[4688]: I1125 12:28:13.669922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerDied","Data":"5e4f08cce4785643bd70a931fb246f14f3e7f84857584e0058c9f01f7f9ac266"} Nov 25 12:28:13 crc kubenswrapper[4688]: I1125 12:28:13.673335 4688 generic.go:334] "Generic (PLEG): container finished" podID="d88165cf-f348-4918-a623-814e0d4cdabe" containerID="f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce" exitCode=0 Nov 25 12:28:13 crc kubenswrapper[4688]: I1125 12:28:13.673415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9w5t" event={"ID":"d88165cf-f348-4918-a623-814e0d4cdabe","Type":"ContainerDied","Data":"f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce"} Nov 25 12:28:13 crc kubenswrapper[4688]: I1125 12:28:13.699035 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2r9dn" podStartSLOduration=3.298605651 podStartE2EDuration="4.699019207s" podCreationTimestamp="2025-11-25 12:28:09 +0000 UTC" firstStartedPulling="2025-11-25 12:28:11.643917975 +0000 UTC m=+841.753546843" lastFinishedPulling="2025-11-25 12:28:13.044331531 +0000 UTC m=+843.153960399" observedRunningTime="2025-11-25 12:28:13.69797254 +0000 UTC m=+843.807601408" watchObservedRunningTime="2025-11-25 12:28:13.699019207 +0000 UTC m=+843.808648075" Nov 25 12:28:14 crc kubenswrapper[4688]: I1125 12:28:14.684176 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9w5t" event={"ID":"d88165cf-f348-4918-a623-814e0d4cdabe","Type":"ContainerStarted","Data":"0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9"} Nov 25 12:28:14 crc kubenswrapper[4688]: I1125 12:28:14.697415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerStarted","Data":"c09f245bfbfe848bd23c605f40acadec3eba0e24544fd72c0f3eb6f8f3a06516"} Nov 25 12:28:14 crc kubenswrapper[4688]: I1125 12:28:14.697503 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerStarted","Data":"aa1e16ed9d9f110fe13db666c3c7c9999bc8e2586656ae90d3fe30d3c845598c"} Nov 25 12:28:14 crc kubenswrapper[4688]: I1125 12:28:14.697515 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" 
event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerStarted","Data":"f3fd94afced5dfaad324f2b8d67103156fdb5863e6d69f1e687f753409a33e2d"} Nov 25 12:28:15 crc kubenswrapper[4688]: I1125 12:28:15.709366 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerStarted","Data":"ba7cb202da133b9f977515f5fc45f862bdf2162fb3476ffa25cfae3a18d9890e"} Nov 25 12:28:15 crc kubenswrapper[4688]: I1125 12:28:15.709862 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerStarted","Data":"57d5b66cdb57d7c25a2ac32928b2229426bef759cd8ca398e30260b1d8854269"} Nov 25 12:28:15 crc kubenswrapper[4688]: I1125 12:28:15.730509 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9w5t" podStartSLOduration=3.2894377390000002 podStartE2EDuration="5.730479794s" podCreationTimestamp="2025-11-25 12:28:10 +0000 UTC" firstStartedPulling="2025-11-25 12:28:11.642057394 +0000 UTC m=+841.751686262" lastFinishedPulling="2025-11-25 12:28:14.083099439 +0000 UTC m=+844.192728317" observedRunningTime="2025-11-25 12:28:15.727260828 +0000 UTC m=+845.836889696" watchObservedRunningTime="2025-11-25 12:28:15.730479794 +0000 UTC m=+845.840108662" Nov 25 12:28:16 crc kubenswrapper[4688]: I1125 12:28:16.724969 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rbtk" event={"ID":"6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4","Type":"ContainerStarted","Data":"5d7767846af2520174b351f3736a5cc9257bfbf3cbd9ca76c0eb09020942f341"} Nov 25 12:28:16 crc kubenswrapper[4688]: I1125 12:28:16.725245 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:16 crc kubenswrapper[4688]: I1125 12:28:16.757165 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5rbtk" podStartSLOduration=7.161244717 podStartE2EDuration="14.757139708s" podCreationTimestamp="2025-11-25 12:28:02 +0000 UTC" firstStartedPulling="2025-11-25 12:28:02.513304492 +0000 UTC m=+832.622933360" lastFinishedPulling="2025-11-25 12:28:10.109199483 +0000 UTC m=+840.218828351" observedRunningTime="2025-11-25 12:28:16.754570809 +0000 UTC m=+846.864199697" watchObservedRunningTime="2025-11-25 12:28:16.757139708 +0000 UTC m=+846.866768576" Nov 25 12:28:17 crc kubenswrapper[4688]: I1125 12:28:17.355274 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:17 crc kubenswrapper[4688]: I1125 12:28:17.404439 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:19 crc kubenswrapper[4688]: I1125 12:28:19.644952 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:19 crc kubenswrapper[4688]: I1125 12:28:19.645837 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:19 crc kubenswrapper[4688]: I1125 12:28:19.688188 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:19 crc kubenswrapper[4688]: I1125 12:28:19.791082 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:20 crc kubenswrapper[4688]: I1125 12:28:20.098508 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r9dn"] Nov 25 12:28:20 crc kubenswrapper[4688]: I1125 12:28:20.644741 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:20 crc kubenswrapper[4688]: I1125 12:28:20.644821 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:20 crc kubenswrapper[4688]: I1125 12:28:20.689021 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:20 crc kubenswrapper[4688]: I1125 12:28:20.783162 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:21 crc kubenswrapper[4688]: I1125 12:28:21.753669 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2r9dn" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerName="registry-server" containerID="cri-o://56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f" gracePeriod=2 Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.196376 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.232541 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-utilities\") pod \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.232646 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-catalog-content\") pod \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.232697 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlz9z\" (UniqueName: \"kubernetes.io/projected/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-kube-api-access-hlz9z\") pod \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\" (UID: \"3e8fc317-d49d-4c40-b68b-696dd4dd96bd\") " Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.233635 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-utilities" (OuterVolumeSpecName: "utilities") pod "3e8fc317-d49d-4c40-b68b-696dd4dd96bd" (UID: "3e8fc317-d49d-4c40-b68b-696dd4dd96bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.242053 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-kube-api-access-hlz9z" (OuterVolumeSpecName: "kube-api-access-hlz9z") pod "3e8fc317-d49d-4c40-b68b-696dd4dd96bd" (UID: "3e8fc317-d49d-4c40-b68b-696dd4dd96bd"). InnerVolumeSpecName "kube-api-access-hlz9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.248707 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e8fc317-d49d-4c40-b68b-696dd4dd96bd" (UID: "3e8fc317-d49d-4c40-b68b-696dd4dd96bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.334259 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.334333 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.334348 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlz9z\" (UniqueName: \"kubernetes.io/projected/3e8fc317-d49d-4c40-b68b-696dd4dd96bd-kube-api-access-hlz9z\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.375097 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bdmv9" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.699854 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9w5t"] Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.762303 4688 generic.go:334] "Generic (PLEG): container finished" podID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerID="56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f" exitCode=0 Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.762423 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r9dn" event={"ID":"3e8fc317-d49d-4c40-b68b-696dd4dd96bd","Type":"ContainerDied","Data":"56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f"} Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.764064 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r9dn" event={"ID":"3e8fc317-d49d-4c40-b68b-696dd4dd96bd","Type":"ContainerDied","Data":"89cff5e9e3c3ae1cc57c9de40f814a8d87e3c4cb93cd5b74bd0fbe9eb90f2f9b"} Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.764125 4688 scope.go:117] "RemoveContainer" containerID="56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.764306 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r9w5t" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" containerName="registry-server" containerID="cri-o://0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9" gracePeriod=2 Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.762442 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r9dn" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.794328 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r9dn"] Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.797725 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r9dn"] Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.805910 4688 scope.go:117] "RemoveContainer" containerID="97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.920635 4688 scope.go:117] "RemoveContainer" containerID="8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.944439 4688 scope.go:117] "RemoveContainer" containerID="56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f" Nov 25 12:28:22 crc kubenswrapper[4688]: E1125 12:28:22.945155 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f\": container with ID starting with 56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f not found: ID does not exist" containerID="56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.945198 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f"} err="failed to get container status \"56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f\": rpc error: code = NotFound desc = could not find container \"56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f\": container with ID starting with 56bd8ac2bbe8cefe2d636811123f75f989167f598efedccddb3b5c8e808c277f not found: ID does not exist" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.945229 4688 scope.go:117] "RemoveContainer" containerID="97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525" Nov 25 12:28:22 crc kubenswrapper[4688]: E1125 12:28:22.945797 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525\": container with ID starting with 97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525 not found: ID does not exist" containerID="97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.945881 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525"} err="failed to get container status \"97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525\": rpc error: code = NotFound desc = could not find container \"97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525\": container with ID starting with 97b0d0dc64d422cce3d886e4ad3b9da1f05bdffb0fc201a85d7f187e10bdc525 not found: ID does not exist" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.945934 4688 scope.go:117] "RemoveContainer" containerID="8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3" Nov 25 12:28:22 crc kubenswrapper[4688]: E1125 12:28:22.947037 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3\": container with ID starting with 8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3 not found: ID does not exist" containerID="8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3" Nov 25 12:28:22 crc kubenswrapper[4688]: I1125 12:28:22.947096 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3"} err="failed to get container status \"8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3\": rpc error: code = NotFound desc = could not find container \"8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3\": container with ID starting with 8dc8108771426ff3ffde0747b63e449c08059d53a51d9f6dece37df9aada8ff3 not found: ID does not exist" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.223775 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.352334 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh7rc\" (UniqueName: \"kubernetes.io/projected/d88165cf-f348-4918-a623-814e0d4cdabe-kube-api-access-bh7rc\") pod \"d88165cf-f348-4918-a623-814e0d4cdabe\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.352840 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-catalog-content\") pod \"d88165cf-f348-4918-a623-814e0d4cdabe\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.352868 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-utilities\") pod \"d88165cf-f348-4918-a623-814e0d4cdabe\" (UID: \"d88165cf-f348-4918-a623-814e0d4cdabe\") " Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.354229 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-utilities" (OuterVolumeSpecName: "utilities") pod "d88165cf-f348-4918-a623-814e0d4cdabe" (UID: "d88165cf-f348-4918-a623-814e0d4cdabe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.359222 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88165cf-f348-4918-a623-814e0d4cdabe-kube-api-access-bh7rc" (OuterVolumeSpecName: "kube-api-access-bh7rc") pod "d88165cf-f348-4918-a623-814e0d4cdabe" (UID: "d88165cf-f348-4918-a623-814e0d4cdabe"). InnerVolumeSpecName "kube-api-access-bh7rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.454513 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.454602 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh7rc\" (UniqueName: \"kubernetes.io/projected/d88165cf-f348-4918-a623-814e0d4cdabe-kube-api-access-bh7rc\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.775853 4688 generic.go:334] "Generic (PLEG): container finished" podID="d88165cf-f348-4918-a623-814e0d4cdabe" containerID="0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9" exitCode=0 Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.775963 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9w5t" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.775956 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9w5t" event={"ID":"d88165cf-f348-4918-a623-814e0d4cdabe","Type":"ContainerDied","Data":"0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9"} Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.776076 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9w5t" event={"ID":"d88165cf-f348-4918-a623-814e0d4cdabe","Type":"ContainerDied","Data":"13b66ddd581f18c9752e6c56e15c6441ff91930be9e02227693ac99e8f9809ab"} Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.776115 4688 scope.go:117] "RemoveContainer" containerID="0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.798304 4688 scope.go:117] "RemoveContainer" containerID="f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.818502 4688 scope.go:117] "RemoveContainer" containerID="2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.833275 4688 scope.go:117] "RemoveContainer" containerID="0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9" Nov 25 12:28:23 crc kubenswrapper[4688]: E1125 12:28:23.834284 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9\": container with ID starting with 0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9 not found: ID does not exist" containerID="0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.834321 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9"} err="failed to get container status \"0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9\": rpc error: code = NotFound desc = could not find container \"0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9\": container with ID starting with 0ff30e389d34cc6f6288ad7ec18e998f03c12602e8088e66f1e9fec30888c9f9 not found: ID does not exist" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.834343 4688 scope.go:117] 
"RemoveContainer" containerID="f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce" Nov 25 12:28:23 crc kubenswrapper[4688]: E1125 12:28:23.834853 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce\": container with ID starting with f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce not found: ID does not exist" containerID="f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.834981 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce"} err="failed to get container status \"f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce\": rpc error: code = NotFound desc = could not find container \"f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce\": container with ID starting with f19eddd3b78f06c9e4dc30df77f23058ee7e17c2a78b527575a27e79359df3ce not found: ID does not exist" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.835089 4688 scope.go:117] "RemoveContainer" containerID="2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547" Nov 25 12:28:23 crc kubenswrapper[4688]: E1125 12:28:23.835466 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547\": container with ID starting with 2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547 not found: ID does not exist" containerID="2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.835582 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547"} err="failed to get container status \"2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547\": rpc error: code = NotFound desc = could not find container \"2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547\": container with ID starting with 2bb916cf5b8a9a7cbddd8df48b88ef570b999f20f324681c007e9a55d3514547 not found: ID does not exist" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.945695 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d88165cf-f348-4918-a623-814e0d4cdabe" (UID: "d88165cf-f348-4918-a623-814e0d4cdabe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.963483 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88165cf-f348-4918-a623-814e0d4cdabe-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:23 crc kubenswrapper[4688]: I1125 12:28:23.972663 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lthrr" Nov 25 12:28:24 crc kubenswrapper[4688]: I1125 12:28:24.112387 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9w5t"] Nov 25 12:28:24 crc kubenswrapper[4688]: I1125 12:28:24.121879 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r9w5t"] Nov 25 12:28:24 crc kubenswrapper[4688]: I1125 12:28:24.748653 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" path="/var/lib/kubelet/pods/3e8fc317-d49d-4c40-b68b-696dd4dd96bd/volumes" Nov 25 12:28:24 crc kubenswrapper[4688]: I1125 12:28:24.749427 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" path="/var/lib/kubelet/pods/d88165cf-f348-4918-a623-814e0d4cdabe/volumes" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.105348 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-47kgw"] Nov 25 12:28:30 crc kubenswrapper[4688]: E1125 12:28:30.107580 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerName="extract-utilities" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.107659 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerName="extract-utilities" Nov 25 12:28:30 crc kubenswrapper[4688]: E1125 12:28:30.107723 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" containerName="registry-server" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.107778 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" containerName="registry-server" Nov 25 12:28:30 crc kubenswrapper[4688]: E1125 12:28:30.107838 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" containerName="extract-utilities" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.107929 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" containerName="extract-utilities" Nov 25 12:28:30 crc kubenswrapper[4688]: E1125 12:28:30.108002 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerName="registry-server" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.108058 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerName="registry-server" Nov 25 12:28:30 crc kubenswrapper[4688]: E1125 12:28:30.108132 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" containerName="extract-content" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.108202 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" containerName="extract-content" Nov 25 12:28:30 crc kubenswrapper[4688]: 
E1125 12:28:30.108265 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerName="extract-content" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.108327 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerName="extract-content" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.108540 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88165cf-f348-4918-a623-814e0d4cdabe" containerName="registry-server" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.108611 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8fc317-d49d-4c40-b68b-696dd4dd96bd" containerName="registry-server" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.109071 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.112187 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.112192 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.112422 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kccm7" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.126703 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-47kgw"] Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.150094 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwcp9\" (UniqueName: \"kubernetes.io/projected/eef7b939-1770-40d0-8ba8-9458f9160a52-kube-api-access-bwcp9\") pod \"openstack-operator-index-47kgw\" (UID: \"eef7b939-1770-40d0-8ba8-9458f9160a52\") " pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.251623 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwcp9\" (UniqueName: \"kubernetes.io/projected/eef7b939-1770-40d0-8ba8-9458f9160a52-kube-api-access-bwcp9\") pod \"openstack-operator-index-47kgw\" (UID: \"eef7b939-1770-40d0-8ba8-9458f9160a52\") " pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.274598 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwcp9\" (UniqueName: \"kubernetes.io/projected/eef7b939-1770-40d0-8ba8-9458f9160a52-kube-api-access-bwcp9\") pod \"openstack-operator-index-47kgw\" (UID: \"eef7b939-1770-40d0-8ba8-9458f9160a52\") " pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.430543 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:30 crc kubenswrapper[4688]: I1125 12:28:30.848495 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-47kgw"] Nov 25 12:28:30 crc kubenswrapper[4688]: W1125 12:28:30.855248 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef7b939_1770_40d0_8ba8_9458f9160a52.slice/crio-828766d70dba34ce68840cc3595ab300ab9a1f4c0a86f340265e57a84ecc18fc WatchSource:0}: Error finding container 828766d70dba34ce68840cc3595ab300ab9a1f4c0a86f340265e57a84ecc18fc: Status 404 returned error can't find the container with id 828766d70dba34ce68840cc3595ab300ab9a1f4c0a86f340265e57a84ecc18fc Nov 25 12:28:31 crc kubenswrapper[4688]: I1125 12:28:31.830322 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-47kgw" event={"ID":"eef7b939-1770-40d0-8ba8-9458f9160a52","Type":"ContainerStarted","Data":"828766d70dba34ce68840cc3595ab300ab9a1f4c0a86f340265e57a84ecc18fc"} Nov 25 12:28:32 crc kubenswrapper[4688]: I1125 12:28:32.371972 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5rbtk" Nov 25 12:28:33 crc kubenswrapper[4688]: I1125 12:28:33.844598 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-47kgw" event={"ID":"eef7b939-1770-40d0-8ba8-9458f9160a52","Type":"ContainerStarted","Data":"f2dfdaaa08bc2355006827f6e79f10acfda2e6d1c8c36c6adb3d97d77412ea95"} Nov 25 12:28:33 crc kubenswrapper[4688]: I1125 12:28:33.864068 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-47kgw" podStartSLOduration=1.258929037 podStartE2EDuration="3.864047579s" podCreationTimestamp="2025-11-25 12:28:30 +0000 UTC" firstStartedPulling="2025-11-25 12:28:30.856961263 +0000 UTC m=+860.966590131" lastFinishedPulling="2025-11-25 12:28:33.462079805 +0000 UTC m=+863.571708673" observedRunningTime="2025-11-25 12:28:33.862637281 +0000 UTC m=+863.972266169" watchObservedRunningTime="2025-11-25 12:28:33.864047579 +0000 UTC m=+863.973676447" Nov 25 12:28:40 crc kubenswrapper[4688]: I1125 12:28:40.430875 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:40 crc kubenswrapper[4688]: I1125 12:28:40.431571 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:40 crc kubenswrapper[4688]: I1125 12:28:40.463969 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:40 crc kubenswrapper[4688]: I1125 12:28:40.922939 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-47kgw" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.538140 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl"] Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.540098 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.541856 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bf552" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.552550 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl"] Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.683821 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-bundle\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.683864 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-util\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.683912 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fht8t\" (UniqueName: \"kubernetes.io/projected/d23f2ee5-379f-4df5-9650-915df314ec2a-kube-api-access-fht8t\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.784802 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fht8t\" (UniqueName: \"kubernetes.io/projected/d23f2ee5-379f-4df5-9650-915df314ec2a-kube-api-access-fht8t\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.784912 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-bundle\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.784936 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-util\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.785391 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-util\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.785668 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-bundle\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.807184 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fht8t\" (UniqueName: \"kubernetes.io/projected/d23f2ee5-379f-4df5-9650-915df314ec2a-kube-api-access-fht8t\") pod \"4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:46 crc kubenswrapper[4688]: I1125 12:28:46.856463 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:47 crc kubenswrapper[4688]: I1125 12:28:47.340622 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl"] Nov 25 12:28:47 crc kubenswrapper[4688]: I1125 12:28:47.943708 4688 generic.go:334] "Generic (PLEG): container finished" podID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerID="14959ba0eb0161cc7165a8ef5849a2986841ef85309e8597df413a8ac3e995ff" exitCode=0 Nov 25 12:28:47 crc kubenswrapper[4688]: I1125 12:28:47.943760 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" event={"ID":"d23f2ee5-379f-4df5-9650-915df314ec2a","Type":"ContainerDied","Data":"14959ba0eb0161cc7165a8ef5849a2986841ef85309e8597df413a8ac3e995ff"} Nov 25 12:28:47 crc kubenswrapper[4688]: I1125 12:28:47.944019 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" event={"ID":"d23f2ee5-379f-4df5-9650-915df314ec2a","Type":"ContainerStarted","Data":"40fb8910bf0178bf3613a8a88eacf91c1aea9190fe893cbb927d1cd0fb3c91c7"} Nov 25 12:28:49 crc kubenswrapper[4688]: I1125 12:28:49.958178 4688 generic.go:334] "Generic (PLEG): container finished" podID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerID="2234c9f56433a7f4c8b789371989f1806ed1ec1c7844b8ded4c2f1a4a52cc976" exitCode=0 Nov 25 12:28:49 crc kubenswrapper[4688]: I1125 12:28:49.958812 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" event={"ID":"d23f2ee5-379f-4df5-9650-915df314ec2a","Type":"ContainerDied","Data":"2234c9f56433a7f4c8b789371989f1806ed1ec1c7844b8ded4c2f1a4a52cc976"} Nov 25 12:28:50 crc kubenswrapper[4688]: I1125 12:28:50.970737 4688 generic.go:334] "Generic (PLEG): container finished" podID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerID="a31639a40d177de4322bf714c867d3c0026081eed025db49ca44805c9002eb75" exitCode=0 Nov 25 12:28:50 crc kubenswrapper[4688]: I1125 12:28:50.970839 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" event={"ID":"d23f2ee5-379f-4df5-9650-915df314ec2a","Type":"ContainerDied","Data":"a31639a40d177de4322bf714c867d3c0026081eed025db49ca44805c9002eb75"} Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.262881 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.369011 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-bundle\") pod \"d23f2ee5-379f-4df5-9650-915df314ec2a\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.369369 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-util\") pod \"d23f2ee5-379f-4df5-9650-915df314ec2a\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.369453 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fht8t\" (UniqueName: \"kubernetes.io/projected/d23f2ee5-379f-4df5-9650-915df314ec2a-kube-api-access-fht8t\") pod \"d23f2ee5-379f-4df5-9650-915df314ec2a\" (UID: \"d23f2ee5-379f-4df5-9650-915df314ec2a\") " Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.370384 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-bundle" (OuterVolumeSpecName: "bundle") pod "d23f2ee5-379f-4df5-9650-915df314ec2a" (UID: "d23f2ee5-379f-4df5-9650-915df314ec2a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.378035 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d23f2ee5-379f-4df5-9650-915df314ec2a-kube-api-access-fht8t" (OuterVolumeSpecName: "kube-api-access-fht8t") pod "d23f2ee5-379f-4df5-9650-915df314ec2a" (UID: "d23f2ee5-379f-4df5-9650-915df314ec2a"). InnerVolumeSpecName "kube-api-access-fht8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.383352 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-util" (OuterVolumeSpecName: "util") pod "d23f2ee5-379f-4df5-9650-915df314ec2a" (UID: "d23f2ee5-379f-4df5-9650-915df314ec2a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.470873 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.470955 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d23f2ee5-379f-4df5-9650-915df314ec2a-util\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.470972 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fht8t\" (UniqueName: \"kubernetes.io/projected/d23f2ee5-379f-4df5-9650-915df314ec2a-kube-api-access-fht8t\") on node \"crc\" DevicePath \"\"" Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.983035 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" event={"ID":"d23f2ee5-379f-4df5-9650-915df314ec2a","Type":"ContainerDied","Data":"40fb8910bf0178bf3613a8a88eacf91c1aea9190fe893cbb927d1cd0fb3c91c7"} Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.983358 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40fb8910bf0178bf3613a8a88eacf91c1aea9190fe893cbb927d1cd0fb3c91c7" Nov 25 12:28:52 crc kubenswrapper[4688]: I1125 12:28:52.983278 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.138785 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4"] Nov 25 12:28:59 crc kubenswrapper[4688]: E1125 12:28:59.139260 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerName="pull" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.139274 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerName="pull" Nov 25 12:28:59 crc kubenswrapper[4688]: E1125 12:28:59.139285 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerName="extract" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.139291 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerName="extract" Nov 25 12:28:59 crc kubenswrapper[4688]: E1125 12:28:59.139300 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerName="util" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.139306 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerName="util" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.140942 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23f2ee5-379f-4df5-9650-915df314ec2a" containerName="extract" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.148365 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.159133 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-dh289" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.178737 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4"] Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.276811 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnz2z\" (UniqueName: \"kubernetes.io/projected/34cf6884-a630-417d-81ff-08c5ff19be31-kube-api-access-hnz2z\") pod \"openstack-operator-controller-operator-9644ff45d-57xk4\" (UID: \"34cf6884-a630-417d-81ff-08c5ff19be31\") " pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.378105 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnz2z\" (UniqueName: \"kubernetes.io/projected/34cf6884-a630-417d-81ff-08c5ff19be31-kube-api-access-hnz2z\") pod \"openstack-operator-controller-operator-9644ff45d-57xk4\" (UID: \"34cf6884-a630-417d-81ff-08c5ff19be31\") " pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.397771 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnz2z\" (UniqueName: \"kubernetes.io/projected/34cf6884-a630-417d-81ff-08c5ff19be31-kube-api-access-hnz2z\") pod \"openstack-operator-controller-operator-9644ff45d-57xk4\" (UID: \"34cf6884-a630-417d-81ff-08c5ff19be31\") " pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.484190 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" Nov 25 12:28:59 crc kubenswrapper[4688]: I1125 12:28:59.746103 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4"] Nov 25 12:29:00 crc kubenswrapper[4688]: I1125 12:29:00.029624 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" event={"ID":"34cf6884-a630-417d-81ff-08c5ff19be31","Type":"ContainerStarted","Data":"adbe8b422a02e8fbd518574ad4ddea3fc5b88bdb065244ce4840fb28e6c1e503"} Nov 25 12:29:04 crc kubenswrapper[4688]: I1125 12:29:04.053608 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" event={"ID":"34cf6884-a630-417d-81ff-08c5ff19be31","Type":"ContainerStarted","Data":"58009941fdf5ac373d610efb9126841c18d84942f03eef251a439162e15ded6e"} Nov 25 12:29:04 crc kubenswrapper[4688]: I1125 12:29:04.054098 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" Nov 25 12:29:04 crc kubenswrapper[4688]: I1125 12:29:04.081953 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" podStartSLOduration=1.722284434 podStartE2EDuration="5.081938032s" podCreationTimestamp="2025-11-25 12:28:59 +0000 UTC" firstStartedPulling="2025-11-25 12:28:59.765916673 +0000 UTC m=+889.875545541" lastFinishedPulling="2025-11-25 12:29:03.125570271 +0000 UTC m=+893.235199139" observedRunningTime="2025-11-25 12:29:04.0785387 +0000 UTC m=+894.188167578" watchObservedRunningTime="2025-11-25 12:29:04.081938032 +0000 UTC m=+894.191566900" Nov 25 12:29:09 crc kubenswrapper[4688]: I1125 12:29:09.487155 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.422709 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.424450 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.426560 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dgtg2" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.450619 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.452207 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.454004 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-csmhw" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.455908 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.464877 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.466301 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.469267 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-p4mpl" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.488177 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.506510 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.545258 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-2snng"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.556388 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.558062 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmczc\" (UniqueName: \"kubernetes.io/projected/5c7a1a6d-a3f3-4490-a6ba-f521535a1364-kube-api-access-rmczc\") pod \"glance-operator-controller-manager-68b95954c9-2snng\" (UID: \"5c7a1a6d-a3f3-4490-a6ba-f521535a1364\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.558152 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm6qg\" (UniqueName: \"kubernetes.io/projected/87bbdcd1-48cf-4310-9131-93dadc55a0f1-kube-api-access-dm6qg\") pod \"cinder-operator-controller-manager-79856dc55c-ptqrp\" (UID: \"87bbdcd1-48cf-4310-9131-93dadc55a0f1\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.558203 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2zxr\" (UniqueName: \"kubernetes.io/projected/6efe1c76-76a3-4c72-bb71-0963553bbb98-kube-api-access-s2zxr\") pod \"barbican-operator-controller-manager-86dc4d89c8-q4ffj\" (UID: \"6efe1c76-76a3-4c72-bb71-0963553bbb98\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.558257 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792m5\" (UniqueName: \"kubernetes.io/projected/acc9de1c-caf4-40f2-8e3c-470f1059599a-kube-api-access-792m5\") pod \"designate-operator-controller-manager-7d695c9b56-vkj6d\" (UID: \"acc9de1c-caf4-40f2-8e3c-470f1059599a\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.560978 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c56v7" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.566601 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-2snng"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.575124 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.576267 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.580981 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7c6np" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.597675 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.644626 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.646145 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.646394 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.648145 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.655418 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rvmp5" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.655690 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.656109 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wxlh5" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.659974 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2zxr\" (UniqueName: \"kubernetes.io/projected/6efe1c76-76a3-4c72-bb71-0963553bbb98-kube-api-access-s2zxr\") pod \"barbican-operator-controller-manager-86dc4d89c8-q4ffj\" (UID: \"6efe1c76-76a3-4c72-bb71-0963553bbb98\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.660043 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-792m5\" (UniqueName: \"kubernetes.io/projected/acc9de1c-caf4-40f2-8e3c-470f1059599a-kube-api-access-792m5\") pod \"designate-operator-controller-manager-7d695c9b56-vkj6d\" (UID: \"acc9de1c-caf4-40f2-8e3c-470f1059599a\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.661399 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmczc\" (UniqueName: \"kubernetes.io/projected/5c7a1a6d-a3f3-4490-a6ba-f521535a1364-kube-api-access-rmczc\") pod \"glance-operator-controller-manager-68b95954c9-2snng\" (UID: \"5c7a1a6d-a3f3-4490-a6ba-f521535a1364\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.661564 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm6qg\" (UniqueName: \"kubernetes.io/projected/87bbdcd1-48cf-4310-9131-93dadc55a0f1-kube-api-access-dm6qg\") pod \"cinder-operator-controller-manager-79856dc55c-ptqrp\" (UID: \"87bbdcd1-48cf-4310-9131-93dadc55a0f1\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.674881 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.676384 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.685501 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-htqjx" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.696868 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm6qg\" (UniqueName: \"kubernetes.io/projected/87bbdcd1-48cf-4310-9131-93dadc55a0f1-kube-api-access-dm6qg\") pod \"cinder-operator-controller-manager-79856dc55c-ptqrp\" (UID: \"87bbdcd1-48cf-4310-9131-93dadc55a0f1\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.700615 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.716776 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-792m5\" (UniqueName: \"kubernetes.io/projected/acc9de1c-caf4-40f2-8e3c-470f1059599a-kube-api-access-792m5\") pod \"designate-operator-controller-manager-7d695c9b56-vkj6d\" (UID: \"acc9de1c-caf4-40f2-8e3c-470f1059599a\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.727685 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.728319 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2zxr\" (UniqueName: \"kubernetes.io/projected/6efe1c76-76a3-4c72-bb71-0963553bbb98-kube-api-access-s2zxr\") pod \"barbican-operator-controller-manager-86dc4d89c8-q4ffj\" (UID: \"6efe1c76-76a3-4c72-bb71-0963553bbb98\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.746441 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.746947 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmczc\" (UniqueName: \"kubernetes.io/projected/5c7a1a6d-a3f3-4490-a6ba-f521535a1364-kube-api-access-rmczc\") pod \"glance-operator-controller-manager-68b95954c9-2snng\" (UID: \"5c7a1a6d-a3f3-4490-a6ba-f521535a1364\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.764515 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsj9\" (UniqueName: \"kubernetes.io/projected/92794534-2689-4fde-8597-4cc766d7b3b0-kube-api-access-5vsj9\") pod \"heat-operator-controller-manager-774b86978c-b9jdn\" (UID: \"92794534-2689-4fde-8597-4cc766d7b3b0\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.764624 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-q2tdz\" (UID: \"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.764704 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8d9c\" (UniqueName: \"kubernetes.io/projected/55967ae9-2dad-4d45-a8c3-bdaa483f9ea7-kube-api-access-h8d9c\") pod \"horizon-operator-controller-manager-68c9694994-zfvn2\" (UID: \"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.764737 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf87c\" (UniqueName: \"kubernetes.io/projected/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-kube-api-access-kf87c\") pod \"infra-operator-controller-manager-d5cc86f4b-q2tdz\" (UID: \"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.768188 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.768241 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.769401 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.771124 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.775776 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gmvqm" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.784986 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.794131 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.795513 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.832911 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rgdkl" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.835610 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.865560 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8d9c\" (UniqueName: \"kubernetes.io/projected/55967ae9-2dad-4d45-a8c3-bdaa483f9ea7-kube-api-access-h8d9c\") pod \"horizon-operator-controller-manager-68c9694994-zfvn2\" (UID: \"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.865625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf87c\" (UniqueName: \"kubernetes.io/projected/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-kube-api-access-kf87c\") pod \"infra-operator-controller-manager-d5cc86f4b-q2tdz\" (UID: \"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.865661 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vsj9\" (UniqueName: \"kubernetes.io/projected/92794534-2689-4fde-8597-4cc766d7b3b0-kube-api-access-5vsj9\") pod \"heat-operator-controller-manager-774b86978c-b9jdn\" (UID: \"92794534-2689-4fde-8597-4cc766d7b3b0\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.865699 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-q2tdz\" (UID: \"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.865747 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqw7s\" (UniqueName: \"kubernetes.io/projected/78451e33-7e86-4635-ac5f-d2c6a9ae6e71-kube-api-access-kqw7s\") pod \"ironic-operator-controller-manager-5bfcdc958c-tn6tq\" (UID: \"78451e33-7e86-4635-ac5f-d2c6a9ae6e71\") " 
pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:29:26 crc kubenswrapper[4688]: E1125 12:29:26.867065 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 12:29:26 crc kubenswrapper[4688]: E1125 12:29:26.867118 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert podName:0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d nodeName:}" failed. No retries permitted until 2025-11-25 12:29:27.367097933 +0000 UTC m=+917.476726801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert") pod "infra-operator-controller-manager-d5cc86f4b-q2tdz" (UID: "0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d") : secret "infra-operator-webhook-server-cert" not found Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.872931 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.875471 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.881712 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.890661 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.892419 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.900194 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.917480 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9n9d8" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.918594 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8d9c\" (UniqueName: \"kubernetes.io/projected/55967ae9-2dad-4d45-a8c3-bdaa483f9ea7-kube-api-access-h8d9c\") pod \"horizon-operator-controller-manager-68c9694994-zfvn2\" (UID: \"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.921237 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vsj9\" (UniqueName: \"kubernetes.io/projected/92794534-2689-4fde-8597-4cc766d7b3b0-kube-api-access-5vsj9\") pod \"heat-operator-controller-manager-774b86978c-b9jdn\" (UID: \"92794534-2689-4fde-8597-4cc766d7b3b0\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.921673 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf87c\" (UniqueName: \"kubernetes.io/projected/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-kube-api-access-kf87c\") pod \"infra-operator-controller-manager-d5cc86f4b-q2tdz\" (UID: \"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.918989 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cmtn6" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.922292 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.944600 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.950623 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r"] Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.952769 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.965759 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-g5kd9" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.967057 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqw7s\" (UniqueName: \"kubernetes.io/projected/78451e33-7e86-4635-ac5f-d2c6a9ae6e71-kube-api-access-kqw7s\") pod \"ironic-operator-controller-manager-5bfcdc958c-tn6tq\" (UID: \"78451e33-7e86-4635-ac5f-d2c6a9ae6e71\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.967118 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvpml\" (UniqueName: \"kubernetes.io/projected/592ea8b1-efc4-4027-a7dc-3943125fd935-kube-api-access-jvpml\") pod \"keystone-operator-controller-manager-748dc6576f-9qfpp\" (UID: \"592ea8b1-efc4-4027-a7dc-3943125fd935\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.967166 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc45t\" (UniqueName: \"kubernetes.io/projected/94f12846-9cbe-4997-9160-3545778ecfde-kube-api-access-zc45t\") pod \"manila-operator-controller-manager-58bb8d67cc-vcnvc\" (UID: \"94f12846-9cbe-4997-9160-3545778ecfde\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" Nov 25 12:29:26 crc kubenswrapper[4688]: I1125 12:29:26.970945 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.000849 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.004953 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.013539 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.017564 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqw7s\" (UniqueName: \"kubernetes.io/projected/78451e33-7e86-4635-ac5f-d2c6a9ae6e71-kube-api-access-kqw7s\") pod \"ironic-operator-controller-manager-5bfcdc958c-tn6tq\" (UID: \"78451e33-7e86-4635-ac5f-d2c6a9ae6e71\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.023265 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.023430 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zhzbv" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.070118 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qps8q\" (UniqueName: \"kubernetes.io/projected/808a5b9f-95a2-4f58-abe2-30758a6a7e2a-kube-api-access-qps8q\") pod \"neutron-operator-controller-manager-7c57c8bbc4-ltlms\" (UID: \"808a5b9f-95a2-4f58-abe2-30758a6a7e2a\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.070220 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xfb\" (UniqueName: \"kubernetes.io/projected/e2f91df4-3b39-4c05-9fee-dd3f7622fd13-kube-api-access-z8xfb\") pod \"nova-operator-controller-manager-79556f57fc-kvt5r\" (UID: \"e2f91df4-3b39-4c05-9fee-dd3f7622fd13\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.070253 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pzg\" (UniqueName: \"kubernetes.io/projected/7cd9dc7e-be06-416a-aebe-c0b160c79697-kube-api-access-w9pzg\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-94snn\" (UID: \"7cd9dc7e-be06-416a-aebe-c0b160c79697\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.070283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvpml\" (UniqueName: \"kubernetes.io/projected/592ea8b1-efc4-4027-a7dc-3943125fd935-kube-api-access-jvpml\") pod \"keystone-operator-controller-manager-748dc6576f-9qfpp\" (UID: \"592ea8b1-efc4-4027-a7dc-3943125fd935\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.070315 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc45t\" (UniqueName: \"kubernetes.io/projected/94f12846-9cbe-4997-9160-3545778ecfde-kube-api-access-zc45t\") pod \"manila-operator-controller-manager-58bb8d67cc-vcnvc\" (UID: \"94f12846-9cbe-4997-9160-3545778ecfde\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 
12:29:27.072684 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.073781 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.091295 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.092409 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.110327 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.137039 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.175106 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.185813 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.186257 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wzwvc" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.188208 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l8kk9" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.208463 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc45t\" (UniqueName: \"kubernetes.io/projected/94f12846-9cbe-4997-9160-3545778ecfde-kube-api-access-zc45t\") pod \"manila-operator-controller-manager-58bb8d67cc-vcnvc\" (UID: \"94f12846-9cbe-4997-9160-3545778ecfde\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.256409 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.258183 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvpml\" (UniqueName: \"kubernetes.io/projected/592ea8b1-efc4-4027-a7dc-3943125fd935-kube-api-access-jvpml\") pod \"keystone-operator-controller-manager-748dc6576f-9qfpp\" (UID: \"592ea8b1-efc4-4027-a7dc-3943125fd935\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.273091 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8xfb\" (UniqueName: \"kubernetes.io/projected/e2f91df4-3b39-4c05-9fee-dd3f7622fd13-kube-api-access-z8xfb\") pod \"nova-operator-controller-manager-79556f57fc-kvt5r\" (UID: \"e2f91df4-3b39-4c05-9fee-dd3f7622fd13\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.273287 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pzg\" (UniqueName: \"kubernetes.io/projected/7cd9dc7e-be06-416a-aebe-c0b160c79697-kube-api-access-w9pzg\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-94snn\" (UID: \"7cd9dc7e-be06-416a-aebe-c0b160c79697\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.273356 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtwsl\" (UniqueName: \"kubernetes.io/projected/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-kube-api-access-dtwsl\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.274400 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qps8q\" (UniqueName: \"kubernetes.io/projected/808a5b9f-95a2-4f58-abe2-30758a6a7e2a-kube-api-access-qps8q\") pod \"neutron-operator-controller-manager-7c57c8bbc4-ltlms\" (UID: \"808a5b9f-95a2-4f58-abe2-30758a6a7e2a\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.275459 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9z9l\" (UniqueName: \"kubernetes.io/projected/6efa691a-9f05-4d6a-8517-cba5b00426cd-kube-api-access-s9z9l\") pod \"octavia-operator-controller-manager-fd75fd47d-4zlm5\" (UID: \"6efa691a-9f05-4d6a-8517-cba5b00426cd\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.275823 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.317838 4688 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.324282 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.347420 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.352425 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-q2s8c" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.353469 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qps8q\" (UniqueName: \"kubernetes.io/projected/808a5b9f-95a2-4f58-abe2-30758a6a7e2a-kube-api-access-qps8q\") pod \"neutron-operator-controller-manager-7c57c8bbc4-ltlms\" (UID: \"808a5b9f-95a2-4f58-abe2-30758a6a7e2a\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.408616 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.409512 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pzg\" (UniqueName: \"kubernetes.io/projected/7cd9dc7e-be06-416a-aebe-c0b160c79697-kube-api-access-w9pzg\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-94snn\" (UID: \"7cd9dc7e-be06-416a-aebe-c0b160c79697\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.413390 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9z9l\" (UniqueName: \"kubernetes.io/projected/6efa691a-9f05-4d6a-8517-cba5b00426cd-kube-api-access-s9z9l\") pod \"octavia-operator-controller-manager-fd75fd47d-4zlm5\" (UID: \"6efa691a-9f05-4d6a-8517-cba5b00426cd\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.413497 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.413564 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-q2tdz\" (UID: \"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.413672 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2ns6\" (UniqueName: \"kubernetes.io/projected/fa49233e-de1b-4bea-85a6-de285e0e60f6-kube-api-access-k2ns6\") pod \"ovn-operator-controller-manager-66cf5c67ff-gzslz\" (UID: \"fa49233e-de1b-4bea-85a6-de285e0e60f6\") " 
pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.413758 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtwsl\" (UniqueName: \"kubernetes.io/projected/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-kube-api-access-dtwsl\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.414286 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.414337 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert podName:7f63e16e-9d9b-4e1a-b497-1417e8e7b79e nodeName:}" failed. No retries permitted until 2025-11-25 12:29:27.914322948 +0000 UTC m=+918.023951816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" (UID: "7f63e16e-9d9b-4e1a-b497-1417e8e7b79e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.414509 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.414557 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert podName:0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d nodeName:}" failed. No retries permitted until 2025-11-25 12:29:28.414548204 +0000 UTC m=+918.524177072 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert") pod "infra-operator-controller-manager-d5cc86f4b-q2tdz" (UID: "0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d") : secret "infra-operator-webhook-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.416223 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-plml2" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.418863 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8xfb\" (UniqueName: \"kubernetes.io/projected/e2f91df4-3b39-4c05-9fee-dd3f7622fd13-kube-api-access-z8xfb\") pod \"nova-operator-controller-manager-79556f57fc-kvt5r\" (UID: \"e2f91df4-3b39-4c05-9fee-dd3f7622fd13\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.436934 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.457205 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.457289 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9z9l\" (UniqueName: \"kubernetes.io/projected/6efa691a-9f05-4d6a-8517-cba5b00426cd-kube-api-access-s9z9l\") pod \"octavia-operator-controller-manager-fd75fd47d-4zlm5\" (UID: \"6efa691a-9f05-4d6a-8517-cba5b00426cd\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.457319 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.461655 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.468599 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rjthr" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.480643 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtwsl\" (UniqueName: \"kubernetes.io/projected/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-kube-api-access-dtwsl\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.484603 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.493236 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.508452 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.522906 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgpt\" (UniqueName: \"kubernetes.io/projected/d4c78fcc-139a-4485-8628-dc14422a4710-kube-api-access-gzgpt\") pod \"swift-operator-controller-manager-6fdc4fcf86-c76gt\" (UID: \"d4c78fcc-139a-4485-8628-dc14422a4710\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.523009 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2ns6\" (UniqueName: \"kubernetes.io/projected/fa49233e-de1b-4bea-85a6-de285e0e60f6-kube-api-access-k2ns6\") pod \"ovn-operator-controller-manager-66cf5c67ff-gzslz\" (UID: \"fa49233e-de1b-4bea-85a6-de285e0e60f6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.523031 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59m2\" (UniqueName: \"kubernetes.io/projected/3649a66a-709f-4b77-b798-e5f90eeb2e5d-kube-api-access-h59m2\") pod \"placement-operator-controller-manager-5db546f9d9-nxnmg\" (UID: \"3649a66a-709f-4b77-b798-e5f90eeb2e5d\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.523066 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcsm\" (UniqueName: \"kubernetes.io/projected/3f65195f-4002-4d44-a25c-3c2603ed14c6-kube-api-access-kjcsm\") pod \"telemetry-operator-controller-manager-c877c965-jptwb\" (UID: \"3f65195f-4002-4d44-a25c-3c2603ed14c6\") " pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.526035 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.528504 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.528580 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-gf8vv"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.529519 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.529621 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.533537 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.540692 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-gf8vv"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.541171 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-7r9qr" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.541363 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-prjxn" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.573912 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.580631 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2ns6\" (UniqueName: \"kubernetes.io/projected/fa49233e-de1b-4bea-85a6-de285e0e60f6-kube-api-access-k2ns6\") pod \"ovn-operator-controller-manager-66cf5c67ff-gzslz\" (UID: \"fa49233e-de1b-4bea-85a6-de285e0e60f6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.621198 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.624158 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgpt\" (UniqueName: \"kubernetes.io/projected/d4c78fcc-139a-4485-8628-dc14422a4710-kube-api-access-gzgpt\") pod \"swift-operator-controller-manager-6fdc4fcf86-c76gt\" (UID: \"d4c78fcc-139a-4485-8628-dc14422a4710\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.624236 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59m2\" (UniqueName: \"kubernetes.io/projected/3649a66a-709f-4b77-b798-e5f90eeb2e5d-kube-api-access-h59m2\") pod \"placement-operator-controller-manager-5db546f9d9-nxnmg\" (UID: \"3649a66a-709f-4b77-b798-e5f90eeb2e5d\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.624265 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wv5\" (UniqueName: \"kubernetes.io/projected/ae188502-8c93-4a53-bb69-b9a964c82bc6-kube-api-access-z4wv5\") pod \"watcher-operator-controller-manager-864885998-gf8vv\" (UID: \"ae188502-8c93-4a53-bb69-b9a964c82bc6\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.624287 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwvrc\" (UniqueName: \"kubernetes.io/projected/59ac66df-a38a-4193-a6ff-fd4e74b1b113-kube-api-access-wwvrc\") pod \"test-operator-controller-manager-5cb74df96-dcnc8\" (UID: \"59ac66df-a38a-4193-a6ff-fd4e74b1b113\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.624312 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kjcsm\" (UniqueName: \"kubernetes.io/projected/3f65195f-4002-4d44-a25c-3c2603ed14c6-kube-api-access-kjcsm\") pod \"telemetry-operator-controller-manager-c877c965-jptwb\" (UID: \"3f65195f-4002-4d44-a25c-3c2603ed14c6\") " pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.674508 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcsm\" (UniqueName: \"kubernetes.io/projected/3f65195f-4002-4d44-a25c-3c2603ed14c6-kube-api-access-kjcsm\") pod \"telemetry-operator-controller-manager-c877c965-jptwb\" (UID: \"3f65195f-4002-4d44-a25c-3c2603ed14c6\") " pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.675398 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59m2\" (UniqueName: \"kubernetes.io/projected/3649a66a-709f-4b77-b798-e5f90eeb2e5d-kube-api-access-h59m2\") pod \"placement-operator-controller-manager-5db546f9d9-nxnmg\" (UID: \"3649a66a-709f-4b77-b798-e5f90eeb2e5d\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.681912 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgpt\" (UniqueName: \"kubernetes.io/projected/d4c78fcc-139a-4485-8628-dc14422a4710-kube-api-access-gzgpt\") pod \"swift-operator-controller-manager-6fdc4fcf86-c76gt\" (UID: \"d4c78fcc-139a-4485-8628-dc14422a4710\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.682283 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.717947 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.720955 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.725305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwvrc\" (UniqueName: \"kubernetes.io/projected/59ac66df-a38a-4193-a6ff-fd4e74b1b113-kube-api-access-wwvrc\") pod \"test-operator-controller-manager-5cb74df96-dcnc8\" (UID: \"59ac66df-a38a-4193-a6ff-fd4e74b1b113\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.725351 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wv5\" (UniqueName: \"kubernetes.io/projected/ae188502-8c93-4a53-bb69-b9a964c82bc6-kube-api-access-z4wv5\") pod \"watcher-operator-controller-manager-864885998-gf8vv\" (UID: \"ae188502-8c93-4a53-bb69-b9a964c82bc6\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.731187 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.731374 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-g74wp" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.731468 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.731954 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.768479 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wv5\" (UniqueName: \"kubernetes.io/projected/ae188502-8c93-4a53-bb69-b9a964c82bc6-kube-api-access-z4wv5\") pod \"watcher-operator-controller-manager-864885998-gf8vv\" (UID: \"ae188502-8c93-4a53-bb69-b9a964c82bc6\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.776838 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwvrc\" (UniqueName: \"kubernetes.io/projected/59ac66df-a38a-4193-a6ff-fd4e74b1b113-kube-api-access-wwvrc\") pod \"test-operator-controller-manager-5cb74df96-dcnc8\" (UID: \"59ac66df-a38a-4193-a6ff-fd4e74b1b113\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.826672 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.827796 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.829203 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.829305 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq8v4\" (UniqueName: \"kubernetes.io/projected/1364865a-3285-428d-b672-064400c43c94-kube-api-access-rq8v4\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.829340 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phtgs\" (UniqueName: \"kubernetes.io/projected/93553656-ef25-4318-81f1-a4e7f973ed38-kube-api-access-phtgs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wf6w6\" (UID: \"93553656-ef25-4318-81f1-a4e7f973ed38\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.829368 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.829447 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6"] Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.830634 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nfg9m" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.931216 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq8v4\" (UniqueName: \"kubernetes.io/projected/1364865a-3285-428d-b672-064400c43c94-kube-api-access-rq8v4\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.931287 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phtgs\" (UniqueName: \"kubernetes.io/projected/93553656-ef25-4318-81f1-a4e7f973ed38-kube-api-access-phtgs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wf6w6\" (UID: \"93553656-ef25-4318-81f1-a4e7f973ed38\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.931321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.931391 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.931427 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.931625 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.931681 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert podName:7f63e16e-9d9b-4e1a-b497-1417e8e7b79e nodeName:}" failed. No retries permitted until 2025-11-25 12:29:28.931663723 +0000 UTC m=+919.041292591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" (UID: "7f63e16e-9d9b-4e1a-b497-1417e8e7b79e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.932619 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.932652 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs podName:1364865a-3285-428d-b672-064400c43c94 nodeName:}" failed. No retries permitted until 2025-11-25 12:29:28.43264225 +0000 UTC m=+918.542271118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs") pod "openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" (UID: "1364865a-3285-428d-b672-064400c43c94") : secret "webhook-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.932707 4688 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: E1125 12:29:27.932733 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs podName:1364865a-3285-428d-b672-064400c43c94 nodeName:}" failed. No retries permitted until 2025-11-25 12:29:28.432724992 +0000 UTC m=+918.542353860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs") pod "openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" (UID: "1364865a-3285-428d-b672-064400c43c94") : secret "metrics-server-cert" not found Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.984780 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq8v4\" (UniqueName: \"kubernetes.io/projected/1364865a-3285-428d-b672-064400c43c94-kube-api-access-rq8v4\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:27 crc kubenswrapper[4688]: I1125 12:29:27.989080 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phtgs\" (UniqueName: \"kubernetes.io/projected/93553656-ef25-4318-81f1-a4e7f973ed38-kube-api-access-phtgs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wf6w6\" (UID: \"93553656-ef25-4318-81f1-a4e7f973ed38\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.047614 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.188599 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.205078 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp"] Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.447648 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.460503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.460623 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-q2tdz\" (UID: \"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.460684 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:28 crc kubenswrapper[4688]: E1125 12:29:28.460790 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 12:29:28 crc kubenswrapper[4688]: E1125 12:29:28.460830 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs podName:1364865a-3285-428d-b672-064400c43c94 nodeName:}" failed. No retries permitted until 2025-11-25 12:29:29.460816945 +0000 UTC m=+919.570445813 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs") pod "openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" (UID: "1364865a-3285-428d-b672-064400c43c94") : secret "webhook-server-cert" not found Nov 25 12:29:28 crc kubenswrapper[4688]: E1125 12:29:28.461151 4688 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 12:29:28 crc kubenswrapper[4688]: E1125 12:29:28.461175 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs podName:1364865a-3285-428d-b672-064400c43c94 nodeName:}" failed. No retries permitted until 2025-11-25 12:29:29.461167944 +0000 UTC m=+919.570796802 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs") pod "openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" (UID: "1364865a-3285-428d-b672-064400c43c94") : secret "metrics-server-cert" not found Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.485507 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-q2tdz\" (UID: \"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.487553 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.534574 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.581483 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.615646 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.638218 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2"] Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.647075 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.803710 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-2snng"] Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.836845 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn"] Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.857552 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d"] Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.965365 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj"] Nov 25 12:29:28 crc kubenswrapper[4688]: I1125 12:29:28.992230 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:28 crc kubenswrapper[4688]: E1125 12:29:28.992462 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 12:29:28 crc kubenswrapper[4688]: E1125 12:29:28.992543 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert podName:7f63e16e-9d9b-4e1a-b497-1417e8e7b79e nodeName:}" failed. No retries permitted until 2025-11-25 12:29:30.992504415 +0000 UTC m=+921.102133283 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" (UID: "7f63e16e-9d9b-4e1a-b497-1417e8e7b79e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.012587 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq"] Nov 25 12:29:29 crc kubenswrapper[4688]: W1125 12:29:29.127475 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78451e33_7e86_4635_ac5f_d2c6a9ae6e71.slice/crio-6f59c1bb2756e313998a9296c418119098f6c37de2340ab16bb0003a5009fb7a WatchSource:0}: Error finding container 6f59c1bb2756e313998a9296c418119098f6c37de2340ab16bb0003a5009fb7a: Status 404 returned error can't find the container with id 6f59c1bb2756e313998a9296c418119098f6c37de2340ab16bb0003a5009fb7a Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.256419 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r"] Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.340209 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc"] Nov 25 12:29:29 crc kubenswrapper[4688]: W1125 12:29:29.346762 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod808a5b9f_95a2_4f58_abe2_30758a6a7e2a.slice/crio-f89970a0339404c01779432cae94181018ec2796a34278105ec8c87d42f5144d WatchSource:0}: Error finding container f89970a0339404c01779432cae94181018ec2796a34278105ec8c87d42f5144d: Status 404 returned error can't find the container with id f89970a0339404c01779432cae94181018ec2796a34278105ec8c87d42f5144d Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.361767 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms"] Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.370933 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp"] Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.395839 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz"] Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.406574 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn"] Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.418862 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" event={"ID":"e2f91df4-3b39-4c05-9fee-dd3f7622fd13","Type":"ContainerStarted","Data":"b6e36a6781c648003e91e0f8940a6ce36e7562ff4ce36fcf11d0e1fd0ce9cfa1"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.420356 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" event={"ID":"5c7a1a6d-a3f3-4490-a6ba-f521535a1364","Type":"ContainerStarted","Data":"eaa59fde6aaf0b5b431e2fb6317d9f9c73a05cd7465438edd38146f1235eb69b"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.422053 4688 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" event={"ID":"808a5b9f-95a2-4f58-abe2-30758a6a7e2a","Type":"ContainerStarted","Data":"f89970a0339404c01779432cae94181018ec2796a34278105ec8c87d42f5144d"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.431472 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" event={"ID":"acc9de1c-caf4-40f2-8e3c-470f1059599a","Type":"ContainerStarted","Data":"870168daa3fce86ab2b0ca4d367fe942a87434653a8c59c1913e7c3ee32e83be"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.432894 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" event={"ID":"87bbdcd1-48cf-4310-9131-93dadc55a0f1","Type":"ContainerStarted","Data":"c6bd1de5bbc2eeea9ee3331d3a9ca2a70fb15fa476d5868eb6bbf58540e80a2f"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.437150 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5"] Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.439773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" event={"ID":"92794534-2689-4fde-8597-4cc766d7b3b0","Type":"ContainerStarted","Data":"e8629a1f34a5cb7b32e39e364c3cd127cc1d8d038ae963a0cbf2e0b367ca54c0"} Nov 25 12:29:29 crc kubenswrapper[4688]: W1125 12:29:29.442389 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6efa691a_9f05_4d6a_8517_cba5b00426cd.slice/crio-ed0afca2e82489b59c64611d96aa982d41bd420840f3d6e9a8a268bce5fac4d3 WatchSource:0}: Error finding container ed0afca2e82489b59c64611d96aa982d41bd420840f3d6e9a8a268bce5fac4d3: Status 404 returned error can't find the container with id ed0afca2e82489b59c64611d96aa982d41bd420840f3d6e9a8a268bce5fac4d3 Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.449964 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" event={"ID":"592ea8b1-efc4-4027-a7dc-3943125fd935","Type":"ContainerStarted","Data":"92f3671edbc30b19f8695431a8f41586a50bdbd611dd6473fd04db6b3620f6f0"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.453368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" event={"ID":"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7","Type":"ContainerStarted","Data":"ebcf11a7dc2f1b3d7154422129c0b4e35258a47a596cd8eabd57fb052ba728d9"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.454140 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" event={"ID":"94f12846-9cbe-4997-9160-3545778ecfde","Type":"ContainerStarted","Data":"52a5ab6a33e50682e9ed27e6115fb98e42eb4d996e5a60e188ae7fd32b995799"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.455927 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" event={"ID":"78451e33-7e86-4635-ac5f-d2c6a9ae6e71","Type":"ContainerStarted","Data":"6f59c1bb2756e313998a9296c418119098f6c37de2340ab16bb0003a5009fb7a"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.457427 4688 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" event={"ID":"6efe1c76-76a3-4c72-bb71-0963553bbb98","Type":"ContainerStarted","Data":"4d88991bae93c1c6b9270b73188b429a445a3a503e123da3e6e599c6beb715ad"} Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.514982 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.515092 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.515218 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.515256 4688 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.515307 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs podName:1364865a-3285-428d-b672-064400c43c94 nodeName:}" failed. No retries permitted until 2025-11-25 12:29:31.515281165 +0000 UTC m=+921.624910053 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs") pod "openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" (UID: "1364865a-3285-428d-b672-064400c43c94") : secret "webhook-server-cert" not found Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.515333 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs podName:1364865a-3285-428d-b672-064400c43c94 nodeName:}" failed. No retries permitted until 2025-11-25 12:29:31.515323517 +0000 UTC m=+921.624952465 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs") pod "openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" (UID: "1364865a-3285-428d-b672-064400c43c94") : secret "metrics-server-cert" not found Nov 25 12:29:29 crc kubenswrapper[4688]: W1125 12:29:29.555981 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d5b85ee_6a84_41bb_bc71_1d6ba38ed13d.slice/crio-dba5e009d5ceb897d510942e057ccbb33bd710f9b7f460b7e4b15af6af236dba WatchSource:0}: Error finding container dba5e009d5ceb897d510942e057ccbb33bd710f9b7f460b7e4b15af6af236dba: Status 404 returned error can't find the container with id dba5e009d5ceb897d510942e057ccbb33bd710f9b7f460b7e4b15af6af236dba Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.557673 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz"] Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.576891 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kf87c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod infra-operator-controller-manager-d5cc86f4b-q2tdz_openstack-operators(0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.577395 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-gf8vv"] Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.595372 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb"] Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.600568 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z4wv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-gf8vv_openstack-operators(ae188502-8c93-4a53-bb69-b9a964c82bc6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.600654 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.148:5001/openstack-k8s-operators/telemetry-operator:28a8c1cf37b45ade24203f1ec8f593431858d288,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjcsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-c877c965-jptwb_openstack-operators(3f65195f-4002-4d44-a25c-3c2603ed14c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.602978 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjcsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-c877c965-jptwb_openstack-operators(3f65195f-4002-4d44-a25c-3c2603ed14c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.604110 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" podUID="3f65195f-4002-4d44-a25c-3c2603ed14c6" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.605671 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z4wv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-gf8vv_openstack-operators(ae188502-8c93-4a53-bb69-b9a964c82bc6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.607090 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" podUID="ae188502-8c93-4a53-bb69-b9a964c82bc6" Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.649930 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6"] Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.661150 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phtgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wf6w6_openstack-operators(93553656-ef25-4318-81f1-a4e7f973ed38): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.662508 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" podUID="93553656-ef25-4318-81f1-a4e7f973ed38" Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.754153 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg"] Nov 25 12:29:29 crc kubenswrapper[4688]: W1125 12:29:29.761504 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3649a66a_709f_4b77_b798_e5f90eeb2e5d.slice/crio-1249c8a9f855216ac83fdaf6eb66cca012dc35c84f430ce65e1c8379451d982e WatchSource:0}: Error finding container 1249c8a9f855216ac83fdaf6eb66cca012dc35c84f430ce65e1c8379451d982e: Status 404 returned error can't find the container with id 
1249c8a9f855216ac83fdaf6eb66cca012dc35c84f430ce65e1c8379451d982e Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.764591 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h59m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-nxnmg_openstack-operators(3649a66a-709f-4b77-b798-e5f90eeb2e5d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.767425 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h59m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-nxnmg_openstack-operators(3649a66a-709f-4b77-b798-e5f90eeb2e5d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.768514 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podUID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.828170 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8"] Nov 25 12:29:29 crc kubenswrapper[4688]: I1125 12:29:29.832423 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt"] Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.854152 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzgpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-c76gt_openstack-operators(d4c78fcc-139a-4485-8628-dc14422a4710): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.861036 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzgpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-c76gt_openstack-operators(d4c78fcc-139a-4485-8628-dc14422a4710): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 12:29:29 crc kubenswrapper[4688]: E1125 12:29:29.862266 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podUID="d4c78fcc-139a-4485-8628-dc14422a4710" Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.464472 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" event={"ID":"3f65195f-4002-4d44-a25c-3c2603ed14c6","Type":"ContainerStarted","Data":"851aebaf3426c2522f72e0c0db00d2ecc53a7eb853b640bffa2620bd34d8bfe1"} Nov 25 12:29:30 crc kubenswrapper[4688]: E1125 12:29:30.467426 4688 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.148:5001/openstack-k8s-operators/telemetry-operator:28a8c1cf37b45ade24203f1ec8f593431858d288\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" podUID="3f65195f-4002-4d44-a25c-3c2603ed14c6" Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.468278 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" event={"ID":"fa49233e-de1b-4bea-85a6-de285e0e60f6","Type":"ContainerStarted","Data":"01f68b7f5cd4d7a8e5627f7d8a3992f2199b23923c959fad989edb6784b52eb1"} Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.469275 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" event={"ID":"3649a66a-709f-4b77-b798-e5f90eeb2e5d","Type":"ContainerStarted","Data":"1249c8a9f855216ac83fdaf6eb66cca012dc35c84f430ce65e1c8379451d982e"} Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.476149 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" event={"ID":"93553656-ef25-4318-81f1-a4e7f973ed38","Type":"ContainerStarted","Data":"4c7e74c6a2d57866636cf43e6a3889aeac0c8be908fc949b2d6c377179d616bc"} Nov 25 12:29:30 crc kubenswrapper[4688]: E1125 12:29:30.478030 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" podUID="93553656-ef25-4318-81f1-a4e7f973ed38" Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.487511 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" event={"ID":"6efa691a-9f05-4d6a-8517-cba5b00426cd","Type":"ContainerStarted","Data":"ed0afca2e82489b59c64611d96aa982d41bd420840f3d6e9a8a268bce5fac4d3"} Nov 25 12:29:30 crc kubenswrapper[4688]: E1125 12:29:30.487809 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podUID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.494761 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" event={"ID":"d4c78fcc-139a-4485-8628-dc14422a4710","Type":"ContainerStarted","Data":"ec375c2ad5038ee104893e671cc5e5552197b30536d8e2e579ce0939b993528d"} Nov 25 12:29:30 crc kubenswrapper[4688]: E1125 12:29:30.498856 4688 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podUID="d4c78fcc-139a-4485-8628-dc14422a4710" Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.499660 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" event={"ID":"59ac66df-a38a-4193-a6ff-fd4e74b1b113","Type":"ContainerStarted","Data":"a3b85a8a1025f7993cb71fe8888503f4a0e3ffdcb8320a6c413744a77aec56d5"} Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.501324 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" event={"ID":"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d","Type":"ContainerStarted","Data":"dba5e009d5ceb897d510942e057ccbb33bd710f9b7f460b7e4b15af6af236dba"} Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.503229 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" event={"ID":"ae188502-8c93-4a53-bb69-b9a964c82bc6","Type":"ContainerStarted","Data":"c9dc92c33714063aa698e450cedb5c3c00bcdb499ec180c725c8408c80d3f91c"} Nov 25 12:29:30 crc kubenswrapper[4688]: I1125 12:29:30.508928 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" event={"ID":"7cd9dc7e-be06-416a-aebe-c0b160c79697","Type":"ContainerStarted","Data":"9310d636be9bb7101ec4350c32fdc1930fef36c8e4d51039e6a6fa7c247b640b"} Nov 25 12:29:30 crc kubenswrapper[4688]: E1125 12:29:30.509933 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" podUID="ae188502-8c93-4a53-bb69-b9a964c82bc6" Nov 25 12:29:31 crc kubenswrapper[4688]: I1125 12:29:31.045271 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:31 crc kubenswrapper[4688]: I1125 12:29:31.055342 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f63e16e-9d9b-4e1a-b497-1417e8e7b79e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh\" (UID: \"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:31 crc kubenswrapper[4688]: I1125 12:29:31.324947 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:31 crc kubenswrapper[4688]: E1125 12:29:31.518725 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" podUID="93553656-ef25-4318-81f1-a4e7f973ed38" Nov 25 12:29:31 crc kubenswrapper[4688]: E1125 12:29:31.519374 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podUID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" Nov 25 12:29:31 crc kubenswrapper[4688]: E1125 12:29:31.519744 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.148:5001/openstack-k8s-operators/telemetry-operator:28a8c1cf37b45ade24203f1ec8f593431858d288\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" podUID="3f65195f-4002-4d44-a25c-3c2603ed14c6" Nov 25 12:29:31 crc kubenswrapper[4688]: E1125 12:29:31.521113 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" podUID="ae188502-8c93-4a53-bb69-b9a964c82bc6" Nov 25 12:29:31 crc kubenswrapper[4688]: E1125 12:29:31.521186 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podUID="d4c78fcc-139a-4485-8628-dc14422a4710" Nov 25 12:29:31 crc kubenswrapper[4688]: I1125 12:29:31.551171 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " 
pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:31 crc kubenswrapper[4688]: I1125 12:29:31.551245 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:31 crc kubenswrapper[4688]: I1125 12:29:31.563685 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-webhook-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:31 crc kubenswrapper[4688]: I1125 12:29:31.567872 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1364865a-3285-428d-b672-064400c43c94-metrics-certs\") pod \"openstack-operator-controller-manager-6bdd9b6cb6-vgfmk\" (UID: \"1364865a-3285-428d-b672-064400c43c94\") " pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:31 crc kubenswrapper[4688]: I1125 12:29:31.822284 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:34 crc kubenswrapper[4688]: I1125 12:29:34.932307 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk"] Nov 25 12:29:34 crc kubenswrapper[4688]: I1125 12:29:34.967350 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh"] Nov 25 12:29:36 crc kubenswrapper[4688]: W1125 12:29:36.142530 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f63e16e_9d9b_4e1a_b497_1417e8e7b79e.slice/crio-4ed582f30725de3aeac8710350f3300d585dd0fee4fcc0fe41df87b9311666a0 WatchSource:0}: Error finding container 4ed582f30725de3aeac8710350f3300d585dd0fee4fcc0fe41df87b9311666a0: Status 404 returned error can't find the container with id 4ed582f30725de3aeac8710350f3300d585dd0fee4fcc0fe41df87b9311666a0 Nov 25 12:29:36 crc kubenswrapper[4688]: W1125 12:29:36.144754 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1364865a_3285_428d_b672_064400c43c94.slice/crio-c243499dcf27c45ec338b520413af63ba8d8e3c2111a31ed7746a9c463b7f15f WatchSource:0}: Error finding container c243499dcf27c45ec338b520413af63ba8d8e3c2111a31ed7746a9c463b7f15f: Status 404 returned error can't find the container with id c243499dcf27c45ec338b520413af63ba8d8e3c2111a31ed7746a9c463b7f15f Nov 25 12:29:36 crc kubenswrapper[4688]: I1125 12:29:36.564667 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" event={"ID":"1364865a-3285-428d-b672-064400c43c94","Type":"ContainerStarted","Data":"c243499dcf27c45ec338b520413af63ba8d8e3c2111a31ed7746a9c463b7f15f"} Nov 25 12:29:36 crc kubenswrapper[4688]: I1125 12:29:36.566229 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" event={"ID":"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e","Type":"ContainerStarted","Data":"4ed582f30725de3aeac8710350f3300d585dd0fee4fcc0fe41df87b9311666a0"} Nov 25 12:29:40 crc kubenswrapper[4688]: E1125 12:29:40.916025 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 25 12:29:40 crc kubenswrapper[4688]: E1125 12:29:40.916757 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8d9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c9694994-zfvn2_openstack-operators(55967ae9-2dad-4d45-a8c3-bdaa483f9ea7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:29:41 crc kubenswrapper[4688]: E1125 12:29:41.719576 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 25 12:29:41 crc kubenswrapper[4688]: E1125 
12:29:41.719893 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvpml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-9qfpp_openstack-operators(592ea8b1-efc4-4027-a7dc-3943125fd935): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:29:42 crc kubenswrapper[4688]: E1125 12:29:42.346429 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f" Nov 25 12:29:42 crc kubenswrapper[4688]: E1125 12:29:42.346985 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-792m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-7d695c9b56-vkj6d_openstack-operators(acc9de1c-caf4-40f2-8e3c-470f1059599a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:29:43 crc kubenswrapper[4688]: E1125 12:29:43.309700 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 25 12:29:43 crc kubenswrapper[4688]: E1125 12:29:43.309937 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9pzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-94snn_openstack-operators(7cd9dc7e-be06-416a-aebe-c0b160c79697): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:29:43 crc kubenswrapper[4688]: E1125 12:29:43.778643 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 25 12:29:43 crc kubenswrapper[4688]: E1125 12:29:43.778985 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5vsj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-774b86978c-b9jdn_openstack-operators(92794534-2689-4fde-8597-4cc766d7b3b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:29:44 crc kubenswrapper[4688]: E1125 12:29:44.253771 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7" Nov 25 12:29:44 crc kubenswrapper[4688]: E1125 12:29:44.253988 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2zxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-86dc4d89c8-q4ffj_openstack-operators(6efe1c76-76a3-4c72-bb71-0963553bbb98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:29:45 crc kubenswrapper[4688]: E1125 12:29:45.044070 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 25 12:29:45 crc kubenswrapper[4688]: E1125 12:29:45.044551 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9z9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-4zlm5_openstack-operators(6efa691a-9f05-4d6a-8517-cba5b00426cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:29:53 crc kubenswrapper[4688]: I1125 12:29:53.684440 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" event={"ID":"1364865a-3285-428d-b672-064400c43c94","Type":"ContainerStarted","Data":"51a6d1ff223cb249707f254d60930719ad6b47c08f8eab629ccdc4f705b4a95c"} Nov 25 12:29:53 crc kubenswrapper[4688]: I1125 12:29:53.685367 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:29:53 crc kubenswrapper[4688]: I1125 12:29:53.722355 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" podStartSLOduration=26.722334421 podStartE2EDuration="26.722334421s" podCreationTimestamp="2025-11-25 12:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:29:53.711382967 +0000 UTC m=+943.821011835" watchObservedRunningTime="2025-11-25 12:29:53.722334421 +0000 UTC m=+943.831963279" Nov 25 12:29:54 crc kubenswrapper[4688]: I1125 12:29:54.698433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" event={"ID":"5c7a1a6d-a3f3-4490-a6ba-f521535a1364","Type":"ContainerStarted","Data":"ad2f08198d77f4a91852dd214eeeeda7f7fc4add50b973d6e93f0c66a0227da9"} Nov 25 12:29:54 crc kubenswrapper[4688]: I1125 12:29:54.700248 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" event={"ID":"78451e33-7e86-4635-ac5f-d2c6a9ae6e71","Type":"ContainerStarted","Data":"255edb788a48d9e6dc4c6eaf16de423dfb1af513dbd467f1800503d66ccaabb4"} Nov 25 12:29:54 crc kubenswrapper[4688]: I1125 12:29:54.705360 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" event={"ID":"e2f91df4-3b39-4c05-9fee-dd3f7622fd13","Type":"ContainerStarted","Data":"8760e49b2fabf2d2fdb55cd5daa32479972f5d903e950a0d2a5c9ddb9348c30c"} Nov 25 12:29:54 crc kubenswrapper[4688]: I1125 12:29:54.707372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" 
event={"ID":"87bbdcd1-48cf-4310-9131-93dadc55a0f1","Type":"ContainerStarted","Data":"c713eb87f5025c5a2ded355017c9ab4f6e343b330239bd8e3851dc3d4e158e1b"} Nov 25 12:29:54 crc kubenswrapper[4688]: I1125 12:29:54.710306 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" event={"ID":"808a5b9f-95a2-4f58-abe2-30758a6a7e2a","Type":"ContainerStarted","Data":"839fedc23668dd7678a9fb69adb801aaa6ec538c909481e2cfd0db29fdd558ba"} Nov 25 12:29:55 crc kubenswrapper[4688]: I1125 12:29:55.722846 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" event={"ID":"94f12846-9cbe-4997-9160-3545778ecfde","Type":"ContainerStarted","Data":"7d3f7f759665cc6decefab783094d0ed3a82679086716d699cfccfa5c2dfcb83"} Nov 25 12:29:55 crc kubenswrapper[4688]: I1125 12:29:55.725580 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" event={"ID":"59ac66df-a38a-4193-a6ff-fd4e74b1b113","Type":"ContainerStarted","Data":"a119ab5221d2689ea1e82bcc5ba063bdd7838b476a9af0446d42847b7d840913"} Nov 25 12:29:55 crc kubenswrapper[4688]: I1125 12:29:55.727302 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" event={"ID":"fa49233e-de1b-4bea-85a6-de285e0e60f6","Type":"ContainerStarted","Data":"c3bf243bf178250c26ce4702aea298267ed145b15477e83d93d50d1fc8da3f37"} Nov 25 12:29:55 crc kubenswrapper[4688]: E1125 12:29:55.842273 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 12:29:55 crc kubenswrapper[4688]: E1125 12:29:55.842425 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kf87c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-q2tdz_openstack-operators(0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Nov 25 12:29:55 crc kubenswrapper[4688]: E1125 12:29:55.844498 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" podUID="0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d" Nov 25 12:29:56 crc kubenswrapper[4688]: E1125 12:29:56.136496 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" podUID="592ea8b1-efc4-4027-a7dc-3943125fd935" Nov 25 12:29:56 crc kubenswrapper[4688]: E1125 12:29:56.638170 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" podUID="6efe1c76-76a3-4c72-bb71-0963553bbb98" Nov 25 12:29:56 crc kubenswrapper[4688]: E1125 12:29:56.764655 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" podUID="6efa691a-9f05-4d6a-8517-cba5b00426cd" Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.773961 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" event={"ID":"3f65195f-4002-4d44-a25c-3c2603ed14c6","Type":"ContainerStarted","Data":"ee1357dcb20670eee0f1daee0282ce8812681af9f3cc544c2cd12432649625d8"} Nov 25 12:29:56 crc kubenswrapper[4688]: E1125 12:29:56.774991 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" podUID="92794534-2689-4fde-8597-4cc766d7b3b0" Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.781997 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" event={"ID":"3649a66a-709f-4b77-b798-e5f90eeb2e5d","Type":"ContainerStarted","Data":"a37bad09623ad3855f9033dd417d87217feb5d79669ad1895e182ef4002f3c35"} Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.792890 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" event={"ID":"592ea8b1-efc4-4027-a7dc-3943125fd935","Type":"ContainerStarted","Data":"ae697942cc4cb8cc7f28a27e9f1d67129a168c23d491a91aaf7698031c44adb6"} Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.798364 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" event={"ID":"93553656-ef25-4318-81f1-a4e7f973ed38","Type":"ContainerStarted","Data":"5036a212d04e6e3526c7d54957449ee62139b0c2449e2dedcd8cfe17d0922dde"} Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.819507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" event={"ID":"d4c78fcc-139a-4485-8628-dc14422a4710","Type":"ContainerStarted","Data":"a45c96f62e19ea42b6e9a9c43b417b497a73b90ff0fd2b088377d433e4956f7e"} Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.822473 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" event={"ID":"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e","Type":"ContainerStarted","Data":"785ee7ed2f591c9c7fa427eb3bb8bc2aa28ab24f4daf32e35cf42f60fe63300f"} Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.844499 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" event={"ID":"ae188502-8c93-4a53-bb69-b9a964c82bc6","Type":"ContainerStarted","Data":"47dd0a5e4c13d85ee89c9f61e5a2017ba6807678b08260179adda75564a5b550"} Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.856115 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" event={"ID":"6efe1c76-76a3-4c72-bb71-0963553bbb98","Type":"ContainerStarted","Data":"35272c699cc59e3504f40d261da98874fb997fb10fe5c5005944c247a702aa58"} Nov 25 12:29:56 crc kubenswrapper[4688]: I1125 12:29:56.863749 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" podStartSLOduration=5.313107496 podStartE2EDuration="29.863710903s" podCreationTimestamp="2025-11-25 12:29:27 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.660986088 +0000 UTC m=+919.770614956" lastFinishedPulling="2025-11-25 12:29:54.211589495 +0000 UTC m=+944.321218363" observedRunningTime="2025-11-25 12:29:56.850745386 +0000 UTC m=+946.960374254" watchObservedRunningTime="2025-11-25 12:29:56.863710903 +0000 UTC m=+946.973339771" Nov 25 12:29:56 crc kubenswrapper[4688]: E1125 12:29:56.968805 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" podUID="55967ae9-2dad-4d45-a8c3-bdaa483f9ea7" Nov 25 12:29:57 crc kubenswrapper[4688]: E1125 12:29:57.138828 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" podUID="acc9de1c-caf4-40f2-8e3c-470f1059599a" Nov 25 12:29:57 crc kubenswrapper[4688]: E1125 12:29:57.156754 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" podUID="7cd9dc7e-be06-416a-aebe-c0b160c79697" Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.871079 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" event={"ID":"acc9de1c-caf4-40f2-8e3c-470f1059599a","Type":"ContainerStarted","Data":"c93ecf4149af1420a8538e992113f151e98adebf5fd5d9eb1f2faab42f126f08"} Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.875858 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" event={"ID":"e2f91df4-3b39-4c05-9fee-dd3f7622fd13","Type":"ContainerStarted","Data":"296b76d19606338f0f9f065340026922303d5800c7cfd656a2c93399c1256e24"} Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.875996 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.882471 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" event={"ID":"fa49233e-de1b-4bea-85a6-de285e0e60f6","Type":"ContainerStarted","Data":"f71fb79d456585ab583768525841d8c49667f00a9ae35de2b4b9386f74778e9e"} Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.893225 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" event={"ID":"808a5b9f-95a2-4f58-abe2-30758a6a7e2a","Type":"ContainerStarted","Data":"5d4bd3f5c1c0c4e8243328522719e8dc14a8fadffc3022dd17798a1fa5b06aa1"} Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.894234 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.899278 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" event={"ID":"6efa691a-9f05-4d6a-8517-cba5b00426cd","Type":"ContainerStarted","Data":"c8e2d25037cdd44bc74a181c8815839eee6b11b52ca04b70d480497d838b8412"} Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.922870 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" podStartSLOduration=5.197470478 podStartE2EDuration="31.922850968s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.406532734 +0000 UTC m=+919.516161602" lastFinishedPulling="2025-11-25 12:29:56.131913224 +0000 UTC m=+946.241542092" observedRunningTime="2025-11-25 12:29:57.92250767 +0000 UTC m=+948.032136548" watchObservedRunningTime="2025-11-25 12:29:57.922850968 +0000 UTC m=+948.032479836" Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.931009 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" event={"ID":"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e","Type":"ContainerStarted","Data":"5d72d96be544f784eece0bb2545a69889a756b0ec9e0a398b2ca2c2ad3813854"} Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.946748 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.947645 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" event={"ID":"7cd9dc7e-be06-416a-aebe-c0b160c79697","Type":"ContainerStarted","Data":"4b1ce1220e1f6faf6c6972e85925462d79be79622c86606a04d149fcd7e6320d"} Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.990353 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" 
event={"ID":"3649a66a-709f-4b77-b798-e5f90eeb2e5d","Type":"ContainerStarted","Data":"29a3b4bdbd058af84a1144bf280c2bde84e4756f49b0bf98533c4bf09c235ba9"} Nov 25 12:29:57 crc kubenswrapper[4688]: I1125 12:29:57.990396 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.002608 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" podStartSLOduration=4.747544047 podStartE2EDuration="32.002586774s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.303251107 +0000 UTC m=+919.412879975" lastFinishedPulling="2025-11-25 12:29:56.558293834 +0000 UTC m=+946.667922702" observedRunningTime="2025-11-25 12:29:57.960329612 +0000 UTC m=+948.069958490" watchObservedRunningTime="2025-11-25 12:29:58.002586774 +0000 UTC m=+948.112215642" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.006778 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" podStartSLOduration=14.202230459 podStartE2EDuration="32.006768136s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:36.146752906 +0000 UTC m=+926.256381774" lastFinishedPulling="2025-11-25 12:29:53.951290583 +0000 UTC m=+944.060919451" observedRunningTime="2025-11-25 12:29:58.002263236 +0000 UTC m=+948.111892104" watchObservedRunningTime="2025-11-25 12:29:58.006768136 +0000 UTC m=+948.116397004" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.007612 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" event={"ID":"87bbdcd1-48cf-4310-9131-93dadc55a0f1","Type":"ContainerStarted","Data":"bc268ba7cf0a39358b5e4c7db6e40a07b71c46205aefb19160b16184061b9091"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.008101 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.018106 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" event={"ID":"92794534-2689-4fde-8597-4cc766d7b3b0","Type":"ContainerStarted","Data":"c533311ceb98d979144959c05a119ea90344b216c9bca895d9c9d5450d4c12f1"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.027289 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" event={"ID":"5c7a1a6d-a3f3-4490-a6ba-f521535a1364","Type":"ContainerStarted","Data":"b7bafbe309ee0562b88fe0c41785ca5f46182542d205efc90cfb1f4b610fc9ae"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.028223 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.039040 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" event={"ID":"ae188502-8c93-4a53-bb69-b9a964c82bc6","Type":"ContainerStarted","Data":"0c7ffa25736bb776fd7d329299bb242cc15dc40784e11680832503c0c011879d"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 
12:29:58.039543 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.050939 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.056048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" event={"ID":"59ac66df-a38a-4193-a6ff-fd4e74b1b113","Type":"ContainerStarted","Data":"4a32e097578afb7b63202f1ffa8ba9300a4c62db6291f4aa0bb4c3526dbefaa9"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.056735 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.079068 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" event={"ID":"3f65195f-4002-4d44-a25c-3c2603ed14c6","Type":"ContainerStarted","Data":"e2a91ddb247fb5bd37164a66b22f7f0d4189d672c5a7671c7f95b84a078b5f1e"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.079727 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.104189 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.104867 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" podStartSLOduration=5.483693463 podStartE2EDuration="32.104849263s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.358113706 +0000 UTC m=+919.467742574" lastFinishedPulling="2025-11-25 12:29:55.979269506 +0000 UTC m=+946.088898374" observedRunningTime="2025-11-25 12:29:58.097941938 +0000 UTC m=+948.207570806" watchObservedRunningTime="2025-11-25 12:29:58.104849263 +0000 UTC m=+948.214478131" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.106655 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" event={"ID":"6efe1c76-76a3-4c72-bb71-0963553bbb98","Type":"ContainerStarted","Data":"b4c9b13e4a677e9751d2d757bf43013184c9a662ac9fb84d2b850ff6a0653b3b"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.106911 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.116400 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" event={"ID":"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7","Type":"ContainerStarted","Data":"1b8cab69de0227b511950b7e4aceff6c6c7d3e5b73d0fc7b5e349439aae844c3"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.122770 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podStartSLOduration=7.90351671 podStartE2EDuration="32.122750492s" 
podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.764427408 +0000 UTC m=+919.874056276" lastFinishedPulling="2025-11-25 12:29:53.98366119 +0000 UTC m=+944.093290058" observedRunningTime="2025-11-25 12:29:58.121475668 +0000 UTC m=+948.231104536" watchObservedRunningTime="2025-11-25 12:29:58.122750492 +0000 UTC m=+948.232379360" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.138122 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" event={"ID":"d4c78fcc-139a-4485-8628-dc14422a4710","Type":"ContainerStarted","Data":"7aa8482d59d6f2aa8872fce59d5e2cfc19eb32a06847bf028d724105b72ae8ef"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.139266 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.145304 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" podStartSLOduration=4.430530537 podStartE2EDuration="32.145275426s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:28.384062029 +0000 UTC m=+918.493690897" lastFinishedPulling="2025-11-25 12:29:56.098806918 +0000 UTC m=+946.208435786" observedRunningTime="2025-11-25 12:29:58.143094998 +0000 UTC m=+948.252723886" watchObservedRunningTime="2025-11-25 12:29:58.145275426 +0000 UTC m=+948.254904294" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.147084 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" event={"ID":"94f12846-9cbe-4997-9160-3545778ecfde","Type":"ContainerStarted","Data":"ea75e399b2a62f5d32a41dd45899779220676a5b875bc9b38c4753c5f9646b8b"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.147792 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.156660 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" event={"ID":"78451e33-7e86-4635-ac5f-d2c6a9ae6e71","Type":"ContainerStarted","Data":"54c67c6de19ac5999de7a0c95b555bf9b7c578efff2055304899fa50a284fbfe"} Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.157352 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.172499 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" podStartSLOduration=6.028324051 podStartE2EDuration="32.172477005s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.846348773 +0000 UTC m=+919.955977641" lastFinishedPulling="2025-11-25 12:29:55.990501727 +0000 UTC m=+946.100130595" observedRunningTime="2025-11-25 12:29:58.172139565 +0000 UTC m=+948.281768433" watchObservedRunningTime="2025-11-25 12:29:58.172477005 +0000 UTC m=+948.282105883" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.227060 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" podStartSLOduration=5.064996601 podStartE2EDuration="32.227038316s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:28.923692732 +0000 UTC m=+919.033321600" lastFinishedPulling="2025-11-25 12:29:56.085734447 +0000 UTC m=+946.195363315" observedRunningTime="2025-11-25 12:29:58.225487673 +0000 UTC m=+948.335116531" watchObservedRunningTime="2025-11-25 12:29:58.227038316 +0000 UTC m=+948.336667184" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.263491 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" podStartSLOduration=7.622215168 podStartE2EDuration="32.263472662s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.600477627 +0000 UTC m=+919.710106495" lastFinishedPulling="2025-11-25 12:29:54.241735121 +0000 UTC m=+944.351363989" observedRunningTime="2025-11-25 12:29:58.261513459 +0000 UTC m=+948.371142327" watchObservedRunningTime="2025-11-25 12:29:58.263472662 +0000 UTC m=+948.373101530" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.284045 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" podStartSLOduration=6.674456185 podStartE2EDuration="31.284021121s" podCreationTimestamp="2025-11-25 12:29:27 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.600378535 +0000 UTC m=+919.710007403" lastFinishedPulling="2025-11-25 12:29:54.209943471 +0000 UTC m=+944.319572339" observedRunningTime="2025-11-25 12:29:58.279460679 +0000 UTC m=+948.389089547" watchObservedRunningTime="2025-11-25 12:29:58.284021121 +0000 UTC m=+948.393649989" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.308244 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" podStartSLOduration=4.4421869 podStartE2EDuration="32.30821318s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.406638406 +0000 UTC m=+919.516267274" lastFinishedPulling="2025-11-25 12:29:57.272664676 +0000 UTC m=+947.382293554" observedRunningTime="2025-11-25 12:29:58.300716949 +0000 UTC m=+948.410345817" watchObservedRunningTime="2025-11-25 12:29:58.30821318 +0000 UTC m=+948.417842048" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.344838 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podStartSLOduration=8.213945124 podStartE2EDuration="32.34480924s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.853857823 +0000 UTC m=+919.963486691" lastFinishedPulling="2025-11-25 12:29:53.984721929 +0000 UTC m=+944.094350807" observedRunningTime="2025-11-25 12:29:58.343897295 +0000 UTC m=+948.453526183" watchObservedRunningTime="2025-11-25 12:29:58.34480924 +0000 UTC m=+948.454438098" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.366371 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" podStartSLOduration=5.500075992 podStartE2EDuration="32.366335346s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.143742225 +0000 UTC m=+919.253371093" 
lastFinishedPulling="2025-11-25 12:29:56.010001579 +0000 UTC m=+946.119630447" observedRunningTime="2025-11-25 12:29:58.360159021 +0000 UTC m=+948.469787899" watchObservedRunningTime="2025-11-25 12:29:58.366335346 +0000 UTC m=+948.475964234" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.384181 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" podStartSLOduration=5.709271025 podStartE2EDuration="32.384153864s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.362676178 +0000 UTC m=+919.472305046" lastFinishedPulling="2025-11-25 12:29:56.037559027 +0000 UTC m=+946.147187885" observedRunningTime="2025-11-25 12:29:58.379862519 +0000 UTC m=+948.489491407" watchObservedRunningTime="2025-11-25 12:29:58.384153864 +0000 UTC m=+948.493782732" Nov 25 12:29:58 crc kubenswrapper[4688]: I1125 12:29:58.406161 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" podStartSLOduration=4.057995051 podStartE2EDuration="32.406130542s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.013750074 +0000 UTC m=+919.123378942" lastFinishedPulling="2025-11-25 12:29:57.361885565 +0000 UTC m=+947.471514433" observedRunningTime="2025-11-25 12:29:58.401184399 +0000 UTC m=+948.510813267" watchObservedRunningTime="2025-11-25 12:29:58.406130542 +0000 UTC m=+948.515759470" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.169375 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" event={"ID":"592ea8b1-efc4-4027-a7dc-3943125fd935","Type":"ContainerStarted","Data":"d8b307a13d0e6e143fadcefc02052bc1e6b00372c4269800f25041addd60f0e7"} Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.172507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" event={"ID":"6efa691a-9f05-4d6a-8517-cba5b00426cd","Type":"ContainerStarted","Data":"bec070dbf35f6a57db847bf4f9a0ca580e6451ba49f64d187e876839779b36c8"} Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.172683 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.179253 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" event={"ID":"acc9de1c-caf4-40f2-8e3c-470f1059599a","Type":"ContainerStarted","Data":"2002cdcb020df2653fd6d6a9849cea699bf658815227d356a9c66cf4d9dccb7f"} Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.179402 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.184378 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" event={"ID":"7cd9dc7e-be06-416a-aebe-c0b160c79697","Type":"ContainerStarted","Data":"47235787b9c64077add0ac3ab974e741b605a5de991b3fbe1d5abb03e9d7f3b0"} Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.184508 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.189052 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" event={"ID":"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7","Type":"ContainerStarted","Data":"4200e3efdabaefebfe09c35bcda0ad4bc5ff7b0279d52668a2e11891f5270a30"} Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.189242 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.194067 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" event={"ID":"92794534-2689-4fde-8597-4cc766d7b3b0","Type":"ContainerStarted","Data":"c4cc97c220d8c958a6b84f7534e32fcef8309b4d4f4cb67ee017cac07d771c15"} Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.198211 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.198692 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.199490 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.202243 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.202320 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.202350 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.202379 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.202409 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.201555 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" podStartSLOduration=4.285847014 podStartE2EDuration="33.201540095s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.444708066 +0000 UTC m=+919.554336934" lastFinishedPulling="2025-11-25 12:29:58.360401147 +0000 UTC m=+948.470030015" observedRunningTime="2025-11-25 12:29:59.193814737 +0000 UTC m=+949.303443595" watchObservedRunningTime="2025-11-25 12:29:59.201540095 +0000 UTC m=+949.311168963" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.218997 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" podStartSLOduration=4.241646339 podStartE2EDuration="33.218981512s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.418804251 +0000 UTC m=+919.528433119" lastFinishedPulling="2025-11-25 12:29:58.396139424 +0000 UTC m=+948.505768292" observedRunningTime="2025-11-25 12:29:59.217063201 +0000 UTC m=+949.326692069" watchObservedRunningTime="2025-11-25 12:29:59.218981512 +0000 UTC m=+949.328610380" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.264032 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" podStartSLOduration=3.757121582 podStartE2EDuration="33.264014117s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.044574219 +0000 UTC m=+919.154203087" lastFinishedPulling="2025-11-25 12:29:58.551466754 +0000 UTC m=+948.661095622" observedRunningTime="2025-11-25 12:29:59.238665169 +0000 UTC m=+949.348294037" watchObservedRunningTime="2025-11-25 12:29:59.264014117 +0000 UTC m=+949.373642985" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.265388 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" podStartSLOduration=3.414108736 podStartE2EDuration="33.265380394s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:28.779427308 +0000 UTC m=+918.889056176" lastFinishedPulling="2025-11-25 12:29:58.630698966 +0000 UTC m=+948.740327834" observedRunningTime="2025-11-25 12:29:59.262539658 +0000 UTC m=+949.372168546" watchObservedRunningTime="2025-11-25 12:29:59.265380394 +0000 UTC m=+949.375009262" Nov 25 12:29:59 crc kubenswrapper[4688]: I1125 12:29:59.489728 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" podStartSLOduration=3.941313085 podStartE2EDuration="33.489704112s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:28.999761709 +0000 UTC m=+919.109390577" lastFinishedPulling="2025-11-25 12:29:58.548152736 +0000 UTC m=+948.657781604" observedRunningTime="2025-11-25 12:29:59.482300234 +0000 UTC m=+949.591929102" watchObservedRunningTime="2025-11-25 12:29:59.489704112 +0000 UTC m=+949.599332980" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.138436 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76"] Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.139382 4688 util.go:30] "No sandbox for pod can be found. 
Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.141265 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.141414 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.147544 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76"] Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.202433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" event={"ID":"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d","Type":"ContainerStarted","Data":"1f9a022e04c3ceeb2c3cfc99e96b4130738c39a5c8ae85be7d9aac79df34944a"} Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.202485 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" event={"ID":"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d","Type":"ContainerStarted","Data":"33034155b4d90219d1d212d8afd5fa2da5c48cf220b2f004e28b897256b0a898"} Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.204325 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.223209 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" podStartSLOduration=4.033056993 podStartE2EDuration="34.223191346s" podCreationTimestamp="2025-11-25 12:29:26 +0000 UTC" firstStartedPulling="2025-11-25 12:29:29.57668399 +0000 UTC m=+919.686312858" lastFinishedPulling="2025-11-25 12:29:59.766818343 +0000 UTC m=+949.876447211" observedRunningTime="2025-11-25 12:30:00.218603293 +0000 UTC m=+950.328232181" watchObservedRunningTime="2025-11-25 12:30:00.223191346 +0000 UTC m=+950.332820214" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.229048 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-secret-volume\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.229092 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-config-volume\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.229134 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbft5\" (UniqueName: \"kubernetes.io/projected/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-kube-api-access-bbft5\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") "
pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.330352 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-config-volume\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.330820 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbft5\" (UniqueName: \"kubernetes.io/projected/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-kube-api-access-bbft5\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.331072 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-secret-volume\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.333422 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-config-volume\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.337190 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-secret-volume\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.352994 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbft5\" (UniqueName: \"kubernetes.io/projected/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-kube-api-access-bbft5\") pod \"collect-profiles-29401230-ljj76\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.460090 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76"
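
Note: the collect-profiles volume entries above show the kubelet's mount pipeline in order: operationExecutor.VerifyControllerAttachedVolume for each expected volume, then MountVolume started, then MountVolume.SetUp succeeded once the plugin has materialized the contents. A toy Go sketch of that reconcile shape; the types and function here are illustrative stand-ins for the kubelet's desired/actual state-of-world reconciler, not its real API:

package main

import "fmt"

type vol struct{ name, plugin string }

// reconcile walks desired volumes: verify attachment first, then mount
// anything not yet present in the actual state of the world.
func reconcile(desired []vol, mounted map[string]bool) {
    for _, v := range desired {
        fmt.Printf("VerifyControllerAttachedVolume started for volume %q (%s)\n", v.name, v.plugin)
    }
    for _, v := range desired {
        if mounted[v.name] {
            continue // already mounted, nothing to do
        }
        fmt.Printf("MountVolume started for volume %q\n", v.name)
        // ...plugin-specific SetUp: write the secret/configmap payload, etc...
        mounted[v.name] = true
        fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
    }
}

func main() {
    reconcile([]vol{
        {"secret-volume", "kubernetes.io/secret"},
        {"config-volume", "kubernetes.io/configmap"},
        {"kube-api-access-bbft5", "kubernetes.io/projected"},
    }, map[string]bool{})
}
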
Nov 25 12:30:00 crc kubenswrapper[4688]: I1125 12:30:00.880600 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76"] Nov 25 12:30:00 crc kubenswrapper[4688]: W1125 12:30:00.888079 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a7f19c8_5ce9_477f_8930_c3ea89fb14c1.slice/crio-c93ff5b0d144a7d032553043972fe401249b692010088d04d5dc01119ca5069c WatchSource:0}: Error finding container c93ff5b0d144a7d032553043972fe401249b692010088d04d5dc01119ca5069c: Status 404 returned error can't find the container with id c93ff5b0d144a7d032553043972fe401249b692010088d04d5dc01119ca5069c Nov 25 12:30:01 crc kubenswrapper[4688]: I1125 12:30:01.212366 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" event={"ID":"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1","Type":"ContainerStarted","Data":"1a835df637867d072881ca33bf73711a3c5454686470355cf1c075edba97c84d"} Nov 25 12:30:01 crc kubenswrapper[4688]: I1125 12:30:01.212421 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" event={"ID":"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1","Type":"ContainerStarted","Data":"c93ff5b0d144a7d032553043972fe401249b692010088d04d5dc01119ca5069c"} Nov 25 12:30:01 crc kubenswrapper[4688]: I1125 12:30:01.233116 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" podStartSLOduration=1.233096844 podStartE2EDuration="1.233096844s" podCreationTimestamp="2025-11-25 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:30:01.228889191 +0000 UTC m=+951.338518059" watchObservedRunningTime="2025-11-25 12:30:01.233096844 +0000 UTC m=+951.342725712" Nov 25 12:30:01 crc kubenswrapper[4688]: I1125 12:30:01.333100 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:30:01 crc kubenswrapper[4688]: I1125 12:30:01.830548 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:30:02 crc kubenswrapper[4688]: I1125 12:30:02.222403 4688 generic.go:334] "Generic (PLEG): container finished" podID="6a7f19c8-5ce9-477f-8930-c3ea89fb14c1" containerID="1a835df637867d072881ca33bf73711a3c5454686470355cf1c075edba97c84d" exitCode=0 Nov 25 12:30:02 crc kubenswrapper[4688]: I1125 12:30:02.222483 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" event={"ID":"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1","Type":"ContainerDied","Data":"1a835df637867d072881ca33bf73711a3c5454686470355cf1c075edba97c84d"} Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.512182 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76"
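
Note: the lines above trace the full lifecycle of the one-shot collect-profiles job container: ContainerStarted at 12:30:01, "container finished" with exitCode=0 at 12:30:02, then a ContainerDied event delivered to the sync loop. A sketch of the event shape and dispatch, with field names mirroring the event={...} output above; these are illustrative types, not the kubelet's own:

package main

import "fmt"

// PodLifecycleEvent mirrors the event={"ID":...,"Type":...,"Data":...}
// fields in the log; Data carries the container (or sandbox) ID.
type PodLifecycleEvent struct {
    ID   string
    Type string
    Data string
}

func syncLoopHandle(ev PodLifecycleEvent) {
    switch ev.Type {
    case "ContainerStarted":
        fmt.Printf("sync pod %s: container %s started\n", ev.ID, ev.Data)
    case "ContainerDied":
        // For a one-shot job container that exited 0, dying is expected.
        fmt.Printf("sync pod %s: container %s died\n", ev.ID, ev.Data)
    }
}

func main() {
    syncLoopHandle(PodLifecycleEvent{
        ID:   "6a7f19c8-5ce9-477f-8930-c3ea89fb14c1",
        Type: "ContainerDied",
        Data: "1a835df637867d072881ca33bf73711a3c5454686470355cf1c075edba97c84d",
    })
}
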
Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.577643 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a7f19c8-5ce9-477f-8930-c3ea89fb14c1" (UID: "6a7f19c8-5ce9-477f-8930-c3ea89fb14c1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.576171 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-config-volume\") pod \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.578498 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-secret-volume\") pod \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.578685 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbft5\" (UniqueName: \"kubernetes.io/projected/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-kube-api-access-bbft5\") pod \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\" (UID: \"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1\") " Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.579130 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.591185 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a7f19c8-5ce9-477f-8930-c3ea89fb14c1" (UID: "6a7f19c8-5ce9-477f-8930-c3ea89fb14c1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.591382 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-kube-api-access-bbft5" (OuterVolumeSpecName: "kube-api-access-bbft5") pod "6a7f19c8-5ce9-477f-8930-c3ea89fb14c1" (UID: "6a7f19c8-5ce9-477f-8930-c3ea89fb14c1"). InnerVolumeSpecName "kube-api-access-bbft5".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.681635 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbft5\" (UniqueName: \"kubernetes.io/projected/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-kube-api-access-bbft5\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:03 crc kubenswrapper[4688]: I1125 12:30:03.681690 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:04 crc kubenswrapper[4688]: I1125 12:30:04.248192 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" event={"ID":"6a7f19c8-5ce9-477f-8930-c3ea89fb14c1","Type":"ContainerDied","Data":"c93ff5b0d144a7d032553043972fe401249b692010088d04d5dc01119ca5069c"} Nov 25 12:30:04 crc kubenswrapper[4688]: I1125 12:30:04.248240 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c93ff5b0d144a7d032553043972fe401249b692010088d04d5dc01119ca5069c" Nov 25 12:30:04 crc kubenswrapper[4688]: I1125 12:30:04.248261 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76" Nov 25 12:30:06 crc kubenswrapper[4688]: I1125 12:30:06.750373 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:30:06 crc kubenswrapper[4688]: I1125 12:30:06.788483 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:30:06 crc kubenswrapper[4688]: I1125 12:30:06.973739 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:30:07 crc kubenswrapper[4688]: I1125 12:30:07.263588 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" Nov 25 12:30:07 crc kubenswrapper[4688]: I1125 12:30:07.496567 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" Nov 25 12:30:07 crc kubenswrapper[4688]: I1125 12:30:07.530188 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" Nov 25 12:30:07 crc kubenswrapper[4688]: I1125 12:30:07.576644 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" Nov 25 12:30:08 crc kubenswrapper[4688]: I1125 12:30:08.191773 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" Nov 25 12:30:08 crc kubenswrapper[4688]: I1125 12:30:08.451863 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:30:08 crc kubenswrapper[4688]: I1125 12:30:08.488948 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:30:08 crc 
kubenswrapper[4688]: I1125 12:30:08.494448 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:30:08 crc kubenswrapper[4688]: I1125 12:30:08.537749 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:30:08 crc kubenswrapper[4688]: I1125 12:30:08.584685 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:30:17 crc kubenswrapper[4688]: I1125 12:30:17.854141 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:30:17 crc kubenswrapper[4688]: I1125 12:30:17.854735 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.536142 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rtz82"] Nov 25 12:30:24 crc kubenswrapper[4688]: E1125 12:30:24.537337 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7f19c8-5ce9-477f-8930-c3ea89fb14c1" containerName="collect-profiles" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.537361 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7f19c8-5ce9-477f-8930-c3ea89fb14c1" containerName="collect-profiles" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.537551 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7f19c8-5ce9-477f-8930-c3ea89fb14c1" containerName="collect-profiles" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.538705 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.541377 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.541864 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7rt7j" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.542188 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.544412 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.560271 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rtz82"] Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.647299 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7gmz"] Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.655493 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz"
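
Note: the machine-config-daemon liveness failure above is an HTTP probe hitting a closed port: the GET against http://127.0.0.1:8798/health is refused at the TCP level, so the prober reports probeResult="failure" with the dial error as output. A minimal Go sketch of such a check; the timeout and success criteria here are illustrative assumptions, not the kubelet prober's exact configuration:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// probe performs one HTTP liveness-style check against url.
func probe(url string) string {
    client := &http.Client{Timeout: time.Second}
    resp, err := client.Get(url)
    if err != nil {
        // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
        return fmt.Sprintf("failure: %v", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 200 && resp.StatusCode < 400 {
        return "success"
    }
    return fmt.Sprintf("failure: status %d", resp.StatusCode)
}

func main() {
    fmt.Println(probe("http://127.0.0.1:8798/health"))
}
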
Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.658360 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7gmz"] Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.664365 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.695212 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b70371-6040-43bd-97bb-6c452af0d2dc-config\") pod \"dnsmasq-dns-675f4bcbfc-rtz82\" (UID: \"10b70371-6040-43bd-97bb-6c452af0d2dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.695273 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjc7p\" (UniqueName: \"kubernetes.io/projected/10b70371-6040-43bd-97bb-6c452af0d2dc-kube-api-access-kjc7p\") pod \"dnsmasq-dns-675f4bcbfc-rtz82\" (UID: \"10b70371-6040-43bd-97bb-6c452af0d2dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.796944 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjc7p\" (UniqueName: \"kubernetes.io/projected/10b70371-6040-43bd-97bb-6c452af0d2dc-kube-api-access-kjc7p\") pod \"dnsmasq-dns-675f4bcbfc-rtz82\" (UID: \"10b70371-6040-43bd-97bb-6c452af0d2dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.796998 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-config\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.797032 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj45v\" (UniqueName: \"kubernetes.io/projected/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-kube-api-access-pj45v\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.797145 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.797170 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b70371-6040-43bd-97bb-6c452af0d2dc-config\") pod \"dnsmasq-dns-675f4bcbfc-rtz82\" (UID: \"10b70371-6040-43bd-97bb-6c452af0d2dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.797953 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b70371-6040-43bd-97bb-6c452af0d2dc-config\") pod \"dnsmasq-dns-675f4bcbfc-rtz82\" (UID: \"10b70371-6040-43bd-97bb-6c452af0d2dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25
12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.815460 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjc7p\" (UniqueName: \"kubernetes.io/projected/10b70371-6040-43bd-97bb-6c452af0d2dc-kube-api-access-kjc7p\") pod \"dnsmasq-dns-675f4bcbfc-rtz82\" (UID: \"10b70371-6040-43bd-97bb-6c452af0d2dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.856432 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.898238 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.898378 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-config\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.898466 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj45v\" (UniqueName: \"kubernetes.io/projected/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-kube-api-access-pj45v\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.899467 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.900055 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-config\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.925031 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj45v\" (UniqueName: \"kubernetes.io/projected/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-kube-api-access-pj45v\") pod \"dnsmasq-dns-78dd6ddcc-g7gmz\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:24 crc kubenswrapper[4688]: I1125 12:30:24.972115 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:25 crc kubenswrapper[4688]: I1125 12:30:25.254039 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7gmz"] Nov 25 12:30:25 crc kubenswrapper[4688]: I1125 12:30:25.262799 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:30:25 crc kubenswrapper[4688]: I1125 12:30:25.328510 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rtz82"] Nov 25 12:30:25 crc kubenswrapper[4688]: W1125 12:30:25.334189 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10b70371_6040_43bd_97bb_6c452af0d2dc.slice/crio-7be1f4167b643eace9a4f25edf7ef4b1eeec56ef2b036f2bd71478d6bbe03739 WatchSource:0}: Error finding container 7be1f4167b643eace9a4f25edf7ef4b1eeec56ef2b036f2bd71478d6bbe03739: Status 404 returned error can't find the container with id 7be1f4167b643eace9a4f25edf7ef4b1eeec56ef2b036f2bd71478d6bbe03739 Nov 25 12:30:25 crc kubenswrapper[4688]: I1125 12:30:25.387272 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" event={"ID":"10b70371-6040-43bd-97bb-6c452af0d2dc","Type":"ContainerStarted","Data":"7be1f4167b643eace9a4f25edf7ef4b1eeec56ef2b036f2bd71478d6bbe03739"} Nov 25 12:30:25 crc kubenswrapper[4688]: I1125 12:30:25.388233 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" event={"ID":"60a719e9-0e80-4db3-9b7f-4cf606cec2b1","Type":"ContainerStarted","Data":"8a2db7480fcfe14c8fa11cee5beb86f8808e493549c85bab7bc872b2672e663c"} Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.683567 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rtz82"] Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.724877 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m6rnn"] Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.726368 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.746424 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m6rnn"] Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.843284 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g8nv\" (UniqueName: \"kubernetes.io/projected/d494f523-bb07-44d5-82fc-6c3c3a8432a3-kube-api-access-8g8nv\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.843486 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.843597 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-config\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.947476 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.947890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-config\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.947957 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g8nv\" (UniqueName: \"kubernetes.io/projected/d494f523-bb07-44d5-82fc-6c3c3a8432a3-kube-api-access-8g8nv\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.949032 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-config\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.951448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.978323 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g8nv\" (UniqueName: 
\"kubernetes.io/projected/d494f523-bb07-44d5-82fc-6c3c3a8432a3-kube-api-access-8g8nv\") pod \"dnsmasq-dns-666b6646f7-m6rnn\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:27 crc kubenswrapper[4688]: I1125 12:30:27.983333 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7gmz"] Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.029868 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-58nv4"] Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.031371 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.050361 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-58nv4"] Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.074921 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.151899 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.152085 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54vpr\" (UniqueName: \"kubernetes.io/projected/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-kube-api-access-54vpr\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.152263 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-config\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.253428 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.253536 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54vpr\" (UniqueName: \"kubernetes.io/projected/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-kube-api-access-54vpr\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.253586 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-config\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.254343 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.254422 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-config\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.276641 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54vpr\" (UniqueName: \"kubernetes.io/projected/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-kube-api-access-54vpr\") pod \"dnsmasq-dns-57d769cc4f-58nv4\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.362817 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.690101 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m6rnn"] Nov 25 12:30:28 crc kubenswrapper[4688]: W1125 12:30:28.712910 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd494f523_bb07_44d5_82fc_6c3c3a8432a3.slice/crio-92d4085bc317a12d4190d4fa964b119c9525accfc42a3356efcdff8748b68e3f WatchSource:0}: Error finding container 92d4085bc317a12d4190d4fa964b119c9525accfc42a3356efcdff8748b68e3f: Status 404 returned error can't find the container with id 92d4085bc317a12d4190d4fa964b119c9525accfc42a3356efcdff8748b68e3f Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.874432 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.876296 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.883099 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.883319 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.883443 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.883708 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.883751 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.883713 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jhscs" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.884243 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.889868 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.932847 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-58nv4"] Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.966661 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.966725 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-config-data\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.966753 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.966806 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.966836 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plc57\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-kube-api-access-plc57\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 
12:30:28.966892 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.966915 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.967017 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.967121 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.967144 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:28 crc kubenswrapper[4688]: I1125 12:30:28.967166 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.068326 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-config-data\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.068393 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.068444 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.068473 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plc57\" (UniqueName: 
\"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-kube-api-access-plc57\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.068514 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.068554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.068578 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.068605 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.069110 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.069136 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.069171 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.069834 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.069961 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.070387 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.070389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.070487 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-config-data\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.070857 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.075810 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.090986 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.092044 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.099856 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.100757 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plc57\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-kube-api-access-plc57\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.105889 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " pod="openstack/rabbitmq-server-0"
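
Note: for the local-volume PV above, MountVolume.MountDevice resolves the device mount path (/mnt/openstack/pv04) and MountVolume.SetUp then exposes it inside the pod's volume directory. The sketch below assumes that exposure is a bind mount and that the pod-side path follows the usual /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<name> layout; both are assumptions about the local-volume plugin's mechanics (Linux-only, needs root), with the paths themselves taken from the log:

package main

import (
    "fmt"
    "os"
    "syscall"
)

func main() {
    device := "/mnt/openstack/pv04" // from "MountDevice succeeded" above
    // Assumed pod-side path for volume local-storage04-crc of rabbitmq-server-0.
    podDir := "/var/lib/kubelet/pods/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e/volumes/kubernetes.io~local-volume/local-storage04-crc"

    if err := os.MkdirAll(podDir, 0o750); err != nil {
        fmt.Println("mkdir:", err)
        return
    }
    // MS_BIND makes podDir a second view of the already-mounted device path.
    if err := syscall.Mount(device, podDir, "", syscall.MS_BIND, ""); err != nil {
        fmt.Println("bind mount:", err)
        return
    }
    fmt.Println("SetUp succeeded for local-storage04-crc")
}
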
Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.171199 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.172452 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.175251 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jlqvn" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.175557 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.175739 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.175779 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.178152 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.183065 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.183565 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.188030 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.219503 4688 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273554 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273602 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvjg\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-kube-api-access-rnvjg\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273621 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273641 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbef45ff-afce-462a-8835-30339db0f5a0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273669 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbef45ff-afce-462a-8835-30339db0f5a0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273721 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273744 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273869 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273923 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.273978 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.378955 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379367 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379395 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379417 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379434 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379459 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379558 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 
12:30:29.379583 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvjg\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-kube-api-access-rnvjg\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379602 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379599 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379620 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbef45ff-afce-462a-8835-30339db0f5a0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.379699 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbef45ff-afce-462a-8835-30339db0f5a0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.380899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.381242 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.381269 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.381433 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.381764 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.384545 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.398340 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbef45ff-afce-462a-8835-30339db0f5a0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.401135 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.402259 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbef45ff-afce-462a-8835-30339db0f5a0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.405761 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvjg\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-kube-api-access-rnvjg\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.419829 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.459282 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" event={"ID":"b9f11a2a-8da1-43f5-93fa-9d383e1351ef","Type":"ContainerStarted","Data":"33bee5ffb6c3ddfe79c0f9c8f7b9b5c35f0e6b9b4ac8b5724dd9d5d31a29f9c6"} Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.462176 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" event={"ID":"d494f523-bb07-44d5-82fc-6c3c3a8432a3","Type":"ContainerStarted","Data":"92d4085bc317a12d4190d4fa964b119c9525accfc42a3356efcdff8748b68e3f"} Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.500150 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:30:29 crc kubenswrapper[4688]: I1125 12:30:29.681347 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.804263 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.806561 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.810031 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9l96t" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.810269 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.810606 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.811558 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.825490 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.827555 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.904648 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3c9c32d1-459d-4c35-8cf3-876542a657e9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.904767 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9c32d1-459d-4c35-8cf3-876542a657e9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.904845 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.904873 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-kolla-config\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.904903 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-config-data-default\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 
12:30:30.904944 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lll8r\" (UniqueName: \"kubernetes.io/projected/3c9c32d1-459d-4c35-8cf3-876542a657e9-kube-api-access-lll8r\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.905240 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:30 crc kubenswrapper[4688]: I1125 12:30:30.905687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9c32d1-459d-4c35-8cf3-876542a657e9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.007279 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9c32d1-459d-4c35-8cf3-876542a657e9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.008272 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.008303 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-kolla-config\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.008322 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-config-data-default\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.008367 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lll8r\" (UniqueName: \"kubernetes.io/projected/3c9c32d1-459d-4c35-8cf3-876542a657e9-kube-api-access-lll8r\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.008405 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.008469 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3c9c32d1-459d-4c35-8cf3-876542a657e9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.008503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3c9c32d1-459d-4c35-8cf3-876542a657e9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.009118 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.009448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3c9c32d1-459d-4c35-8cf3-876542a657e9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.009945 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-kolla-config\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.010036 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-config-data-default\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.011611 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c9c32d1-459d-4c35-8cf3-876542a657e9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.015091 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9c32d1-459d-4c35-8cf3-876542a657e9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.024308 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9c32d1-459d-4c35-8cf3-876542a657e9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.027016 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lll8r\" (UniqueName: \"kubernetes.io/projected/3c9c32d1-459d-4c35-8cf3-876542a657e9-kube-api-access-lll8r\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc 
kubenswrapper[4688]: I1125 12:30:31.032355 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"3c9c32d1-459d-4c35-8cf3-876542a657e9\") " pod="openstack/openstack-galera-0" Nov 25 12:30:31 crc kubenswrapper[4688]: I1125 12:30:31.132461 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.199668 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.218996 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.223295 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-v9r77" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.224467 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.229608 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.232965 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.258270 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.333154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666dbc1a-fbdf-4ff1-b949-926ea3e70472-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.333237 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/666dbc1a-fbdf-4ff1-b949-926ea3e70472-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.333272 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.333322 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/666dbc1a-fbdf-4ff1-b949-926ea3e70472-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.333347 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.333376 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42fj6\" (UniqueName: \"kubernetes.io/projected/666dbc1a-fbdf-4ff1-b949-926ea3e70472-kube-api-access-42fj6\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.333428 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.333461 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.434614 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.434703 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666dbc1a-fbdf-4ff1-b949-926ea3e70472-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.434750 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/666dbc1a-fbdf-4ff1-b949-926ea3e70472-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.434779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.434825 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/666dbc1a-fbdf-4ff1-b949-926ea3e70472-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.434846 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.434874 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42fj6\" (UniqueName: \"kubernetes.io/projected/666dbc1a-fbdf-4ff1-b949-926ea3e70472-kube-api-access-42fj6\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.434924 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.435410 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.436206 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.438794 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.439793 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/666dbc1a-fbdf-4ff1-b949-926ea3e70472-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.441325 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/666dbc1a-fbdf-4ff1-b949-926ea3e70472-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.444350 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666dbc1a-fbdf-4ff1-b949-926ea3e70472-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.445999 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/666dbc1a-fbdf-4ff1-b949-926ea3e70472-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.459499 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.461630 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42fj6\" (UniqueName: \"kubernetes.io/projected/666dbc1a-fbdf-4ff1-b949-926ea3e70472-kube-api-access-42fj6\") pod \"openstack-cell1-galera-0\" (UID: \"666dbc1a-fbdf-4ff1-b949-926ea3e70472\") " pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.480128 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.481129 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.484243 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rpj24" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.484484 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.484798 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.498141 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.536618 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/412ee2a8-6c40-4142-8e09-05f4c22862c0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.536711 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqlkt\" (UniqueName: \"kubernetes.io/projected/412ee2a8-6c40-4142-8e09-05f4c22862c0-kube-api-access-fqlkt\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.536753 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412ee2a8-6c40-4142-8e09-05f4c22862c0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.536821 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/412ee2a8-6c40-4142-8e09-05f4c22862c0-kolla-config\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.536884 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/412ee2a8-6c40-4142-8e09-05f4c22862c0-config-data\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.561687 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.639118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/412ee2a8-6c40-4142-8e09-05f4c22862c0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.639203 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqlkt\" (UniqueName: \"kubernetes.io/projected/412ee2a8-6c40-4142-8e09-05f4c22862c0-kube-api-access-fqlkt\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.639231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412ee2a8-6c40-4142-8e09-05f4c22862c0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.639296 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/412ee2a8-6c40-4142-8e09-05f4c22862c0-kolla-config\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.639376 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/412ee2a8-6c40-4142-8e09-05f4c22862c0-config-data\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.643270 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/412ee2a8-6c40-4142-8e09-05f4c22862c0-config-data\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.647768 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/412ee2a8-6c40-4142-8e09-05f4c22862c0-kolla-config\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.649143 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412ee2a8-6c40-4142-8e09-05f4c22862c0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.658439 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqlkt\" (UniqueName: \"kubernetes.io/projected/412ee2a8-6c40-4142-8e09-05f4c22862c0-kube-api-access-fqlkt\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 
12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.658499 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/412ee2a8-6c40-4142-8e09-05f4c22862c0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"412ee2a8-6c40-4142-8e09-05f4c22862c0\") " pod="openstack/memcached-0" Nov 25 12:30:32 crc kubenswrapper[4688]: I1125 12:30:32.839255 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 12:30:33 crc kubenswrapper[4688]: W1125 12:30:33.767213 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbbbfdc1_1c5b_4d8a_bffb_b7b1c0ed0b4e.slice/crio-1c4e69df94b838daa19aa7464cc5c53a4f5f98999787ff7d701b3c965e5b6e10 WatchSource:0}: Error finding container 1c4e69df94b838daa19aa7464cc5c53a4f5f98999787ff7d701b3c965e5b6e10: Status 404 returned error can't find the container with id 1c4e69df94b838daa19aa7464cc5c53a4f5f98999787ff7d701b3c965e5b6e10 Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.245409 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.246395 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.253325 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-psxz2" Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.264700 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.265895 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zf89\" (UniqueName: \"kubernetes.io/projected/91c71377-dafd-4693-9408-7c0ec206490e-kube-api-access-9zf89\") pod \"kube-state-metrics-0\" (UID: \"91c71377-dafd-4693-9408-7c0ec206490e\") " pod="openstack/kube-state-metrics-0" Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.367159 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zf89\" (UniqueName: \"kubernetes.io/projected/91c71377-dafd-4693-9408-7c0ec206490e-kube-api-access-9zf89\") pod \"kube-state-metrics-0\" (UID: \"91c71377-dafd-4693-9408-7c0ec206490e\") " pod="openstack/kube-state-metrics-0" Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.388057 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zf89\" (UniqueName: \"kubernetes.io/projected/91c71377-dafd-4693-9408-7c0ec206490e-kube-api-access-9zf89\") pod \"kube-state-metrics-0\" (UID: \"91c71377-dafd-4693-9408-7c0ec206490e\") " pod="openstack/kube-state-metrics-0" Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.506966 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e","Type":"ContainerStarted","Data":"1c4e69df94b838daa19aa7464cc5c53a4f5f98999787ff7d701b3c965e5b6e10"} Nov 25 12:30:34 crc kubenswrapper[4688]: I1125 12:30:34.573174 4688 util.go:30] "No sandbox for pod can be found. 
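
The W1125 manager.go:1169 warning above ("Failed to process watch event ... Status 404") is cAdvisor's cgroup watcher racing container registration: the new crio-1c4e69df... cgroup under kubepods-burstable-poddbbbfdc1_1c5b_4d8a_bffb_b7b1c0ed0b4e.slice appears before CRI-O can answer a status query for that container ID, so the lookup 404s. It is generally harmless startup noise; the PLEG "ContainerStarted" event one second later (12:30:34.506966) reports the same ID 1c4e69df... for openstack/rabbitmq-server-0, whose pod UID matches the cgroup path. A throwaway stdlib-only helper (not part of kubelet; the regexes simply encode the kubepods-...-pod<uid>.slice/crio-<id> layout visible in the warning) shows how to pull both identifiers out of such a path for correlation:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	// pod UID segment uses underscores in the cgroup name, dashes in the API
	podRE = regexp.MustCompile(`kubepods-(?:burstable-|besteffort-)?pod([0-9a-f_]+)\.slice`)
	ctrRE = regexp.MustCompile(`crio-([0-9a-f]{64})`)
)

func main() {
	path := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbbbfdc1_1c5b_4d8a_bffb_b7b1c0ed0b4e.slice/crio-1c4e69df94b838daa19aa7464cc5c53a4f5f98999787ff7d701b3c965e5b6e10"
	if m := podRE.FindStringSubmatch(path); m != nil {
		fmt.Println("pod UID:  ", strings.ReplaceAll(m[1], "_", "-"))
	}
	if m := ctrRE.FindStringSubmatch(path); m != nil {
		fmt.Println("container:", m[1])
	}
}

Run against the path in the warning, it prints pod UID dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e and the 64-hex container ID, which is exactly how the warning can be tied to the later PLEG event for rabbitmq-server-0.
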
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.591317 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-92hwc"] Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.592645 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.598459 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.598466 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.599006 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fzrzv" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.607097 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-92hwc"] Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.620229 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-nndbf"] Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.623803 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.627891 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74kvz\" (UniqueName: \"kubernetes.io/projected/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-kube-api-access-74kvz\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.627955 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-log-ovn\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.628017 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-run\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.628036 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-run-ovn\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.628063 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-combined-ca-bundle\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.628089 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-ovn-controller-tls-certs\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.628131 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-scripts\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.635490 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-nndbf"] Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729186 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-combined-ca-bundle\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-ovn-controller-tls-certs\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729319 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-log\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729365 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-run\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729382 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-scripts\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729456 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74kvz\" (UniqueName: \"kubernetes.io/projected/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-kube-api-access-74kvz\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729492 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-etc-ovs\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729552 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-log-ovn\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729586 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6f000f3-dc04-44a4-b019-d41633753240-scripts\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729702 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-lib\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729752 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gdp\" (UniqueName: \"kubernetes.io/projected/d6f000f3-dc04-44a4-b019-d41633753240-kube-api-access-f5gdp\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729829 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-run\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.729851 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-run-ovn\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.730139 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-log-ovn\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.730167 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-run-ovn\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.730240 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-var-run\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.731416 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-scripts\") pod \"ovn-controller-92hwc\" (UID: 
\"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.739540 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-combined-ca-bundle\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.740870 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-ovn-controller-tls-certs\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.757110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74kvz\" (UniqueName: \"kubernetes.io/projected/4ac126ff-ac63-4d6a-b201-e6dbd8ba3153-kube-api-access-74kvz\") pod \"ovn-controller-92hwc\" (UID: \"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153\") " pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.831652 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-lib\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.831700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5gdp\" (UniqueName: \"kubernetes.io/projected/d6f000f3-dc04-44a4-b019-d41633753240-kube-api-access-f5gdp\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.831760 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-log\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.831823 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-run\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.831865 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-etc-ovs\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.831906 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6f000f3-dc04-44a4-b019-d41633753240-scripts\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.832658 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-run\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.832782 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-etc-ovs\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.833163 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-lib\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.833253 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d6f000f3-dc04-44a4-b019-d41633753240-var-log\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.835735 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6f000f3-dc04-44a4-b019-d41633753240-scripts\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.850171 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5gdp\" (UniqueName: \"kubernetes.io/projected/d6f000f3-dc04-44a4-b019-d41633753240-kube-api-access-f5gdp\") pod \"ovn-controller-ovs-nndbf\" (UID: \"d6f000f3-dc04-44a4-b019-d41633753240\") " pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.926731 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-92hwc" Nov 25 12:30:37 crc kubenswrapper[4688]: I1125 12:30:37.964131 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.497366 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.499314 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.502983 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7lb7g" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.503011 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.502990 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.503099 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.503255 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.513499 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.593065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33b0d963-d13d-4b40-b458-b85ec4f10131-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.593233 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.593298 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.593335 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.593382 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7zp\" (UniqueName: \"kubernetes.io/projected/33b0d963-d13d-4b40-b458-b85ec4f10131-kube-api-access-wl7zp\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.593410 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b0d963-d13d-4b40-b458-b85ec4f10131-config\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.593445 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33b0d963-d13d-4b40-b458-b85ec4f10131-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.593474 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700135 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7zp\" (UniqueName: \"kubernetes.io/projected/33b0d963-d13d-4b40-b458-b85ec4f10131-kube-api-access-wl7zp\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b0d963-d13d-4b40-b458-b85ec4f10131-config\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700270 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33b0d963-d13d-4b40-b458-b85ec4f10131-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700304 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700345 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33b0d963-d13d-4b40-b458-b85ec4f10131-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700391 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700438 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 
12:30:39.700672 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.700813 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/33b0d963-d13d-4b40-b458-b85ec4f10131-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.701875 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b0d963-d13d-4b40-b458-b85ec4f10131-config\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.701938 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33b0d963-d13d-4b40-b458-b85ec4f10131-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.714570 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.714640 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.717221 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/33b0d963-d13d-4b40-b458-b85ec4f10131-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.718415 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7zp\" (UniqueName: \"kubernetes.io/projected/33b0d963-d13d-4b40-b458-b85ec4f10131-kube-api-access-wl7zp\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.726027 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"33b0d963-d13d-4b40-b458-b85ec4f10131\") " pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:39 crc kubenswrapper[4688]: I1125 12:30:39.827762 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.321954 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.323603 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.325292 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2lv7t" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.326514 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.327041 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.330450 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.341020 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.444422 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbvx\" (UniqueName: \"kubernetes.io/projected/3644aadc-3c20-41f5-8969-f84b941eef27-kube-api-access-9vbvx\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.444490 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.444561 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.444597 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.444635 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3644aadc-3c20-41f5-8969-f84b941eef27-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.444660 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3644aadc-3c20-41f5-8969-f84b941eef27-config\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc 
kubenswrapper[4688]: I1125 12:30:42.444695 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3644aadc-3c20-41f5-8969-f84b941eef27-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.444774 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.546764 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3644aadc-3c20-41f5-8969-f84b941eef27-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.546842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.546938 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbvx\" (UniqueName: \"kubernetes.io/projected/3644aadc-3c20-41f5-8969-f84b941eef27-kube-api-access-9vbvx\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.546986 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.547025 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.547055 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.547090 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3644aadc-3c20-41f5-8969-f84b941eef27-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.547115 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3644aadc-3c20-41f5-8969-f84b941eef27-config\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.547357 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3644aadc-3c20-41f5-8969-f84b941eef27-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.547718 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.548083 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3644aadc-3c20-41f5-8969-f84b941eef27-config\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.548725 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3644aadc-3c20-41f5-8969-f84b941eef27-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.554265 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.554342 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.554873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3644aadc-3c20-41f5-8969-f84b941eef27-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.565747 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbvx\" (UniqueName: \"kubernetes.io/projected/3644aadc-3c20-41f5-8969-f84b941eef27-kube-api-access-9vbvx\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.571807 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3644aadc-3c20-41f5-8969-f84b941eef27\") " pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:42 crc kubenswrapper[4688]: I1125 12:30:42.651216 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 12:30:43 crc kubenswrapper[4688]: E1125 12:30:43.266706 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 12:30:43 crc kubenswrapper[4688]: E1125 12:30:43.266905 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj45v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-g7gmz_openstack(60a719e9-0e80-4db3-9b7f-4cf606cec2b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:30:43 crc kubenswrapper[4688]: E1125 12:30:43.268115 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" podUID="60a719e9-0e80-4db3-9b7f-4cf606cec2b1" Nov 25 12:30:43 crc kubenswrapper[4688]: E1125 12:30:43.328748 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 12:30:43 crc kubenswrapper[4688]: E1125 12:30:43.328895 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjc7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-rtz82_openstack(10b70371-6040-43bd-97bb-6c452af0d2dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:30:43 crc kubenswrapper[4688]: E1125 12:30:43.330052 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" podUID="10b70371-6040-43bd-97bb-6c452af0d2dc" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.593124 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" event={"ID":"60a719e9-0e80-4db3-9b7f-4cf606cec2b1","Type":"ContainerDied","Data":"8a2db7480fcfe14c8fa11cee5beb86f8808e493549c85bab7bc872b2672e663c"} Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.593597 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a2db7480fcfe14c8fa11cee5beb86f8808e493549c85bab7bc872b2672e663c" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.595589 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" event={"ID":"10b70371-6040-43bd-97bb-6c452af0d2dc","Type":"ContainerDied","Data":"7be1f4167b643eace9a4f25edf7ef4b1eeec56ef2b036f2bd71478d6bbe03739"} Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.595671 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be1f4167b643eace9a4f25edf7ef4b1eeec56ef2b036f2bd71478d6bbe03739" Nov 25 12:30:44 crc 
kubenswrapper[4688]: I1125 12:30:44.776950 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.852215 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.885090 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjc7p\" (UniqueName: \"kubernetes.io/projected/10b70371-6040-43bd-97bb-6c452af0d2dc-kube-api-access-kjc7p\") pod \"10b70371-6040-43bd-97bb-6c452af0d2dc\" (UID: \"10b70371-6040-43bd-97bb-6c452af0d2dc\") " Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.885238 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b70371-6040-43bd-97bb-6c452af0d2dc-config\") pod \"10b70371-6040-43bd-97bb-6c452af0d2dc\" (UID: \"10b70371-6040-43bd-97bb-6c452af0d2dc\") " Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.887542 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b70371-6040-43bd-97bb-6c452af0d2dc-config" (OuterVolumeSpecName: "config") pod "10b70371-6040-43bd-97bb-6c452af0d2dc" (UID: "10b70371-6040-43bd-97bb-6c452af0d2dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.896293 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b70371-6040-43bd-97bb-6c452af0d2dc-kube-api-access-kjc7p" (OuterVolumeSpecName: "kube-api-access-kjc7p") pod "10b70371-6040-43bd-97bb-6c452af0d2dc" (UID: "10b70371-6040-43bd-97bb-6c452af0d2dc"). InnerVolumeSpecName "kube-api-access-kjc7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.990258 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj45v\" (UniqueName: \"kubernetes.io/projected/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-kube-api-access-pj45v\") pod \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.990394 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-config\") pod \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.990689 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-dns-svc\") pod \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\" (UID: \"60a719e9-0e80-4db3-9b7f-4cf606cec2b1\") " Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.990916 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-config" (OuterVolumeSpecName: "config") pod "60a719e9-0e80-4db3-9b7f-4cf606cec2b1" (UID: "60a719e9-0e80-4db3-9b7f-4cf606cec2b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.991199 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b70371-6040-43bd-97bb-6c452af0d2dc-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.991225 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.991243 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjc7p\" (UniqueName: \"kubernetes.io/projected/10b70371-6040-43bd-97bb-6c452af0d2dc-kube-api-access-kjc7p\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.991617 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60a719e9-0e80-4db3-9b7f-4cf606cec2b1" (UID: "60a719e9-0e80-4db3-9b7f-4cf606cec2b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:44 crc kubenswrapper[4688]: I1125 12:30:44.996729 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-kube-api-access-pj45v" (OuterVolumeSpecName: "kube-api-access-pj45v") pod "60a719e9-0e80-4db3-9b7f-4cf606cec2b1" (UID: "60a719e9-0e80-4db3-9b7f-4cf606cec2b1"). InnerVolumeSpecName "kube-api-access-pj45v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.085376 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:30:45 crc kubenswrapper[4688]: W1125 12:30:45.091961 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c9c32d1_459d_4c35_8cf3_876542a657e9.slice/crio-53e63780a4349c2b5260d7e34106ae54305f2f823c0e5ce199e4b590190b6988 WatchSource:0}: Error finding container 53e63780a4349c2b5260d7e34106ae54305f2f823c0e5ce199e4b590190b6988: Status 404 returned error can't find the container with id 53e63780a4349c2b5260d7e34106ae54305f2f823c0e5ce199e4b590190b6988 Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.092592 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj45v\" (UniqueName: \"kubernetes.io/projected/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-kube-api-access-pj45v\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.092641 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a719e9-0e80-4db3-9b7f-4cf606cec2b1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.094055 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.100993 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.266777 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-92hwc"] Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.275478 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/kube-state-metrics-0"] Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.303999 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.388280 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 12:30:45 crc kubenswrapper[4688]: W1125 12:30:45.449875 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b0d963_d13d_4b40_b458_b85ec4f10131.slice/crio-590c4de9bf3cd81303bdd016ff131111dec51de25978061a30b3674186063d8e WatchSource:0}: Error finding container 590c4de9bf3cd81303bdd016ff131111dec51de25978061a30b3674186063d8e: Status 404 returned error can't find the container with id 590c4de9bf3cd81303bdd016ff131111dec51de25978061a30b3674186063d8e Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.568998 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-nndbf"] Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.602773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"666dbc1a-fbdf-4ff1-b949-926ea3e70472","Type":"ContainerStarted","Data":"8dd020e8815279926453ffada80745c087787868ca44316eaf0632a117357176"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.604856 4688 generic.go:334] "Generic (PLEG): container finished" podID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" containerID="e6d03bd6f7034a9cdd8686de54bb8aeea11819828b1d43a57223fcc543e23538" exitCode=0 Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.604927 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" event={"ID":"b9f11a2a-8da1-43f5-93fa-9d383e1351ef","Type":"ContainerDied","Data":"e6d03bd6f7034a9cdd8686de54bb8aeea11819828b1d43a57223fcc543e23538"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.606300 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"91c71377-dafd-4693-9408-7c0ec206490e","Type":"ContainerStarted","Data":"7f40b20b839c6bc8874b0f5657b2987cd1127e12c2f8a9b68e838af94409857c"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.611008 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-92hwc" event={"ID":"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153","Type":"ContainerStarted","Data":"edd9db98a5e22caf7bf1a605cbfb6779caabe11e6e4b10c9989e50b681377481"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.612698 4688 generic.go:334] "Generic (PLEG): container finished" podID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" containerID="9e59288207854914f0afb9af8d71594634b183a423870348e113f04bd183ea9c" exitCode=0 Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.612774 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" event={"ID":"d494f523-bb07-44d5-82fc-6c3c3a8432a3","Type":"ContainerDied","Data":"9e59288207854914f0afb9af8d71594634b183a423870348e113f04bd183ea9c"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.613926 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3c9c32d1-459d-4c35-8cf3-876542a657e9","Type":"ContainerStarted","Data":"53e63780a4349c2b5260d7e34106ae54305f2f823c0e5ce199e4b590190b6988"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.614915 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbef45ff-afce-462a-8835-30339db0f5a0","Type":"ContainerStarted","Data":"eeb3de703bdb8c0a41cdeb567749070a7fb9f4bd304224a83d58347695a0b389"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.616217 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"33b0d963-d13d-4b40-b458-b85ec4f10131","Type":"ContainerStarted","Data":"590c4de9bf3cd81303bdd016ff131111dec51de25978061a30b3674186063d8e"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.617367 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"412ee2a8-6c40-4142-8e09-05f4c22862c0","Type":"ContainerStarted","Data":"1b7c5b5dc7b8ba8807d2d5821bac390e13f7d146b09bfa8a16a0c95d54597d3b"} Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.617391 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g7gmz" Nov 25 12:30:45 crc kubenswrapper[4688]: I1125 12:30:45.617462 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rtz82" Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.218498 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rtz82"] Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.223215 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rtz82"] Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.233232 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7gmz"] Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.238083 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7gmz"] Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.503379 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.628175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e","Type":"ContainerStarted","Data":"291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45"} Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.631345 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nndbf" event={"ID":"d6f000f3-dc04-44a4-b019-d41633753240","Type":"ContainerStarted","Data":"b30829f21e80f16b244e18300898390b85035f2f36f954fc076ea98695ce84e4"} Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.634256 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" event={"ID":"d494f523-bb07-44d5-82fc-6c3c3a8432a3","Type":"ContainerStarted","Data":"1da2ae81d61f44532369d1ecd7639513acc9b2d4ee0bf7c722f42cf32ae77dc4"} Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.634329 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.643472 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbef45ff-afce-462a-8835-30339db0f5a0","Type":"ContainerStarted","Data":"f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89"} Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.647320 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" event={"ID":"b9f11a2a-8da1-43f5-93fa-9d383e1351ef","Type":"ContainerStarted","Data":"eca084a27b027aace4ca1c52172548914ba394a748bd93807506ce7b9b9490d9"} Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.648103 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.713611 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" podStartSLOduration=3.7888754860000002 podStartE2EDuration="19.71358493s" podCreationTimestamp="2025-11-25 12:30:27 +0000 UTC" firstStartedPulling="2025-11-25 12:30:28.71895 +0000 UTC m=+978.828578868" lastFinishedPulling="2025-11-25 12:30:44.643659434 +0000 UTC m=+994.753288312" observedRunningTime="2025-11-25 12:30:46.706405577 +0000 UTC m=+996.816034445" watchObservedRunningTime="2025-11-25 12:30:46.71358493 +0000 UTC m=+996.823213798" Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.729955 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" podStartSLOduration=3.099518677 podStartE2EDuration="18.729935978s" podCreationTimestamp="2025-11-25 12:30:28 +0000 UTC" firstStartedPulling="2025-11-25 12:30:28.929995573 +0000 UTC m=+979.039624431" lastFinishedPulling="2025-11-25 12:30:44.560412864 +0000 UTC m=+994.670041732" observedRunningTime="2025-11-25 12:30:46.724649506 +0000 UTC m=+996.834278404" watchObservedRunningTime="2025-11-25 12:30:46.729935978 +0000 UTC m=+996.839564846" Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.752330 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b70371-6040-43bd-97bb-6c452af0d2dc" path="/var/lib/kubelet/pods/10b70371-6040-43bd-97bb-6c452af0d2dc/volumes" Nov 25 12:30:46 crc kubenswrapper[4688]: I1125 12:30:46.752903 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a719e9-0e80-4db3-9b7f-4cf606cec2b1" path="/var/lib/kubelet/pods/60a719e9-0e80-4db3-9b7f-4cf606cec2b1/volumes" Nov 25 12:30:46 crc kubenswrapper[4688]: W1125 12:30:46.909826 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3644aadc_3c20_41f5_8969_f84b941eef27.slice/crio-5ffa8ba8cb292dc9149d36a7dc2da43d768b57c4568ed6307d115965094a021c WatchSource:0}: Error finding container 5ffa8ba8cb292dc9149d36a7dc2da43d768b57c4568ed6307d115965094a021c: Status 404 returned error can't find the container with id 5ffa8ba8cb292dc9149d36a7dc2da43d768b57c4568ed6307d115965094a021c Nov 25 12:30:47 crc kubenswrapper[4688]: I1125 12:30:47.658245 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3644aadc-3c20-41f5-8969-f84b941eef27","Type":"ContainerStarted","Data":"5ffa8ba8cb292dc9149d36a7dc2da43d768b57c4568ed6307d115965094a021c"} Nov 25 12:30:47 crc kubenswrapper[4688]: I1125 12:30:47.854169 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:30:47 crc kubenswrapper[4688]: I1125 12:30:47.854234 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.176506 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-sfg7j"] Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.180161 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.182696 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.196126 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sfg7j"] Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.277149 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfcc3ad5-018f-4723-bd38-1384baf3d72e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.277213 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfcc3ad5-018f-4723-bd38-1384baf3d72e-config\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.277262 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cfcc3ad5-018f-4723-bd38-1384baf3d72e-ovs-rundir\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.277290 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcc3ad5-018f-4723-bd38-1384baf3d72e-combined-ca-bundle\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.277340 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cfcc3ad5-018f-4723-bd38-1384baf3d72e-ovn-rundir\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.277373 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx5vn\" (UniqueName: \"kubernetes.io/projected/cfcc3ad5-018f-4723-bd38-1384baf3d72e-kube-api-access-mx5vn\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.305725 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-58nv4"] Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.341577 4688 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7lq62"] Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.343391 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.347273 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.361033 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7lq62"] Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.382681 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcc3ad5-018f-4723-bd38-1384baf3d72e-combined-ca-bundle\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.382760 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.382805 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cfcc3ad5-018f-4723-bd38-1384baf3d72e-ovn-rundir\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.382848 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx5vn\" (UniqueName: \"kubernetes.io/projected/cfcc3ad5-018f-4723-bd38-1384baf3d72e-kube-api-access-mx5vn\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.382908 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcjzg\" (UniqueName: \"kubernetes.io/projected/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-kube-api-access-jcjzg\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.382934 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.382977 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-config\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.383018 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cfcc3ad5-018f-4723-bd38-1384baf3d72e-config\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.383040 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfcc3ad5-018f-4723-bd38-1384baf3d72e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.383078 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cfcc3ad5-018f-4723-bd38-1384baf3d72e-ovs-rundir\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.383382 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cfcc3ad5-018f-4723-bd38-1384baf3d72e-ovn-rundir\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.383389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cfcc3ad5-018f-4723-bd38-1384baf3d72e-ovs-rundir\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.383991 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfcc3ad5-018f-4723-bd38-1384baf3d72e-config\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.390400 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcc3ad5-018f-4723-bd38-1384baf3d72e-combined-ca-bundle\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.410999 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx5vn\" (UniqueName: \"kubernetes.io/projected/cfcc3ad5-018f-4723-bd38-1384baf3d72e-kube-api-access-mx5vn\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.426623 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfcc3ad5-018f-4723-bd38-1384baf3d72e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sfg7j\" (UID: \"cfcc3ad5-018f-4723-bd38-1384baf3d72e\") " pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.484139 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcjzg\" (UniqueName: \"kubernetes.io/projected/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-kube-api-access-jcjzg\") pod 
\"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.484191 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.484265 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-config\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.484329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.485342 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.485510 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.486024 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-config\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.500703 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcjzg\" (UniqueName: \"kubernetes.io/projected/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-kube-api-access-jcjzg\") pod \"dnsmasq-dns-7f896c8c65-7lq62\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.526950 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-sfg7j" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.543347 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m6rnn"] Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.543681 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" podUID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" containerName="dnsmasq-dns" containerID="cri-o://1da2ae81d61f44532369d1ecd7639513acc9b2d4ee0bf7c722f42cf32ae77dc4" gracePeriod=10 Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.584290 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vc4v4"] Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.586415 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.590117 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.610963 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vc4v4"] Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.667883 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.682281 4688 generic.go:334] "Generic (PLEG): container finished" podID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" containerID="1da2ae81d61f44532369d1ecd7639513acc9b2d4ee0bf7c722f42cf32ae77dc4" exitCode=0 Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.682344 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" event={"ID":"d494f523-bb07-44d5-82fc-6c3c3a8432a3","Type":"ContainerDied","Data":"1da2ae81d61f44532369d1ecd7639513acc9b2d4ee0bf7c722f42cf32ae77dc4"} Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.682508 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" podUID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" containerName="dnsmasq-dns" containerID="cri-o://eca084a27b027aace4ca1c52172548914ba394a748bd93807506ce7b9b9490d9" gracePeriod=10 Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.688286 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-config\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.688355 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.688408 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.688683 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.688888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wks\" (UniqueName: \"kubernetes.io/projected/2c0e78db-622d-4795-99ee-7e2055d4449e-kube-api-access-h6wks\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.790870 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-config\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.790977 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.791041 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.791077 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.791163 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6wks\" (UniqueName: \"kubernetes.io/projected/2c0e78db-622d-4795-99ee-7e2055d4449e-kube-api-access-h6wks\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.791809 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-config\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.791892 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 
crc kubenswrapper[4688]: I1125 12:30:49.792379 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.793583 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.820585 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6wks\" (UniqueName: \"kubernetes.io/projected/2c0e78db-622d-4795-99ee-7e2055d4449e-kube-api-access-h6wks\") pod \"dnsmasq-dns-86db49b7ff-vc4v4\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") " pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:49 crc kubenswrapper[4688]: I1125 12:30:49.907346 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:50 crc kubenswrapper[4688]: I1125 12:30:50.700480 4688 generic.go:334] "Generic (PLEG): container finished" podID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" containerID="eca084a27b027aace4ca1c52172548914ba394a748bd93807506ce7b9b9490d9" exitCode=0 Nov 25 12:30:50 crc kubenswrapper[4688]: I1125 12:30:50.700544 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" event={"ID":"b9f11a2a-8da1-43f5-93fa-9d383e1351ef","Type":"ContainerDied","Data":"eca084a27b027aace4ca1c52172548914ba394a748bd93807506ce7b9b9490d9"} Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.766798 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.838604 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g8nv\" (UniqueName: \"kubernetes.io/projected/d494f523-bb07-44d5-82fc-6c3c3a8432a3-kube-api-access-8g8nv\") pod \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.838755 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-config\") pod \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.838864 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-dns-svc\") pod \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\" (UID: \"d494f523-bb07-44d5-82fc-6c3c3a8432a3\") " Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.844022 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d494f523-bb07-44d5-82fc-6c3c3a8432a3-kube-api-access-8g8nv" (OuterVolumeSpecName: "kube-api-access-8g8nv") pod "d494f523-bb07-44d5-82fc-6c3c3a8432a3" (UID: "d494f523-bb07-44d5-82fc-6c3c3a8432a3"). InnerVolumeSpecName "kube-api-access-8g8nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.887479 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d494f523-bb07-44d5-82fc-6c3c3a8432a3" (UID: "d494f523-bb07-44d5-82fc-6c3c3a8432a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.887725 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-config" (OuterVolumeSpecName: "config") pod "d494f523-bb07-44d5-82fc-6c3c3a8432a3" (UID: "d494f523-bb07-44d5-82fc-6c3c3a8432a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.941159 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g8nv\" (UniqueName: \"kubernetes.io/projected/d494f523-bb07-44d5-82fc-6c3c3a8432a3-kube-api-access-8g8nv\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.941207 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:51 crc kubenswrapper[4688]: I1125 12:30:51.941221 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d494f523-bb07-44d5-82fc-6c3c3a8432a3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.577657 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.649814 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-config\") pod \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.649874 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54vpr\" (UniqueName: \"kubernetes.io/projected/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-kube-api-access-54vpr\") pod \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.649949 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-dns-svc\") pod \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\" (UID: \"b9f11a2a-8da1-43f5-93fa-9d383e1351ef\") " Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.653202 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-kube-api-access-54vpr" (OuterVolumeSpecName: "kube-api-access-54vpr") pod "b9f11a2a-8da1-43f5-93fa-9d383e1351ef" (UID: "b9f11a2a-8da1-43f5-93fa-9d383e1351ef"). InnerVolumeSpecName "kube-api-access-54vpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.687560 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-config" (OuterVolumeSpecName: "config") pod "b9f11a2a-8da1-43f5-93fa-9d383e1351ef" (UID: "b9f11a2a-8da1-43f5-93fa-9d383e1351ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.692576 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b9f11a2a-8da1-43f5-93fa-9d383e1351ef" (UID: "b9f11a2a-8da1-43f5-93fa-9d383e1351ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.731740 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.731746 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-m6rnn" event={"ID":"d494f523-bb07-44d5-82fc-6c3c3a8432a3","Type":"ContainerDied","Data":"92d4085bc317a12d4190d4fa964b119c9525accfc42a3356efcdff8748b68e3f"} Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.732196 4688 scope.go:117] "RemoveContainer" containerID="1da2ae81d61f44532369d1ecd7639513acc9b2d4ee0bf7c722f42cf32ae77dc4" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.736211 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" event={"ID":"b9f11a2a-8da1-43f5-93fa-9d383e1351ef","Type":"ContainerDied","Data":"33bee5ffb6c3ddfe79c0f9c8f7b9b5c35f0e6b9b4ac8b5724dd9d5d31a29f9c6"} Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.736379 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-58nv4" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.753375 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.753414 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54vpr\" (UniqueName: \"kubernetes.io/projected/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-kube-api-access-54vpr\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.753427 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9f11a2a-8da1-43f5-93fa-9d383e1351ef-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.818064 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m6rnn"] Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.824572 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-m6rnn"] Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.853725 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-58nv4"] Nov 25 12:30:52 crc kubenswrapper[4688]: I1125 12:30:52.881413 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-58nv4"] Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.128401 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7lq62"] Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.286124 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sfg7j"] Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.366398 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vc4v4"] Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.420119 4688 scope.go:117] "RemoveContainer" containerID="9e59288207854914f0afb9af8d71594634b183a423870348e113f04bd183ea9c" Nov 25 12:30:53 crc kubenswrapper[4688]: W1125 12:30:53.427676 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf03b68b7_0432_4cc7_bd2d_01a4fc3436a0.slice/crio-5a22950d2fdfe4a15c0bd9e5cab8e3a47faac4014476539cbe73d11a89e31234 WatchSource:0}: Error finding container 5a22950d2fdfe4a15c0bd9e5cab8e3a47faac4014476539cbe73d11a89e31234: Status 404 returned error can't find the container with id 5a22950d2fdfe4a15c0bd9e5cab8e3a47faac4014476539cbe73d11a89e31234 Nov 25 12:30:53 crc kubenswrapper[4688]: W1125 12:30:53.430503 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c0e78db_622d_4795_99ee_7e2055d4449e.slice/crio-4881203707bd0409008e4793f2afd7426265332e863c06e58e0426df80333f1a WatchSource:0}: Error finding container 4881203707bd0409008e4793f2afd7426265332e863c06e58e0426df80333f1a: Status 404 returned error can't find the container with id 4881203707bd0409008e4793f2afd7426265332e863c06e58e0426df80333f1a Nov 25 12:30:53 crc kubenswrapper[4688]: W1125 12:30:53.431644 4688 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfcc3ad5_018f_4723_bd38_1384baf3d72e.slice/crio-98e68be1d0901b12e8d82c747841f844262788eb0911b81d3c20841a3fd3b01a WatchSource:0}: Error finding container 98e68be1d0901b12e8d82c747841f844262788eb0911b81d3c20841a3fd3b01a: Status 404 returned error can't find the container with id 98e68be1d0901b12e8d82c747841f844262788eb0911b81d3c20841a3fd3b01a Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.497983 4688 scope.go:117] "RemoveContainer" containerID="eca084a27b027aace4ca1c52172548914ba394a748bd93807506ce7b9b9490d9" Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.539119 4688 scope.go:117] "RemoveContainer" containerID="e6d03bd6f7034a9cdd8686de54bb8aeea11819828b1d43a57223fcc543e23538" Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.747457 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"412ee2a8-6c40-4142-8e09-05f4c22862c0","Type":"ContainerStarted","Data":"b2dc3c3e06034ba1ca8821ad57cd03c0b486bd4aa931222853e4984264718026"} Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.748369 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.750691 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" event={"ID":"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0","Type":"ContainerStarted","Data":"5a22950d2fdfe4a15c0bd9e5cab8e3a47faac4014476539cbe73d11a89e31234"} Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.756234 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" event={"ID":"2c0e78db-622d-4795-99ee-7e2055d4449e","Type":"ContainerStarted","Data":"4881203707bd0409008e4793f2afd7426265332e863c06e58e0426df80333f1a"} Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.763133 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"666dbc1a-fbdf-4ff1-b949-926ea3e70472","Type":"ContainerStarted","Data":"01e949e54cb9a9e330e623254cdef105246e06fd10a00617ae3be6f64cdbd4a9"} Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.771586 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.446124487 podStartE2EDuration="21.771570115s" podCreationTimestamp="2025-11-25 12:30:32 +0000 UTC" firstStartedPulling="2025-11-25 12:30:45.084937872 +0000 UTC m=+995.194566740" lastFinishedPulling="2025-11-25 12:30:52.4103835 +0000 UTC m=+1002.520012368" observedRunningTime="2025-11-25 12:30:53.765267216 +0000 UTC m=+1003.874896084" watchObservedRunningTime="2025-11-25 12:30:53.771570115 +0000 UTC m=+1003.881198983" Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.774823 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sfg7j" event={"ID":"cfcc3ad5-018f-4723-bd38-1384baf3d72e","Type":"ContainerStarted","Data":"98e68be1d0901b12e8d82c747841f844262788eb0911b81d3c20841a3fd3b01a"} Nov 25 12:30:53 crc kubenswrapper[4688]: I1125 12:30:53.780348 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3c9c32d1-459d-4c35-8cf3-876542a657e9","Type":"ContainerStarted","Data":"294ed083109b053a11dfae25b8eaeceab700ac19735cb2df06a56bc953d8a288"} Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.750014 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" path="/var/lib/kubelet/pods/b9f11a2a-8da1-43f5-93fa-9d383e1351ef/volumes" Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.752351 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" path="/var/lib/kubelet/pods/d494f523-bb07-44d5-82fc-6c3c3a8432a3/volumes" Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.789906 4688 generic.go:334] "Generic (PLEG): container finished" podID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerID="b8ee7c713f4ce31d4ee67dd9a54a2798a4fec9274475ce54c8df34b51b92adbe" exitCode=0 Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.789992 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" event={"ID":"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0","Type":"ContainerDied","Data":"b8ee7c713f4ce31d4ee67dd9a54a2798a4fec9274475ce54c8df34b51b92adbe"} Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.792720 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3644aadc-3c20-41f5-8969-f84b941eef27","Type":"ContainerStarted","Data":"cf4bd3749724ae480dee71233f559afc0e9f291fa82db8e014f2704fb54a445a"} Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.794810 4688 generic.go:334] "Generic (PLEG): container finished" podID="2c0e78db-622d-4795-99ee-7e2055d4449e" containerID="ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47" exitCode=0 Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.794922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" event={"ID":"2c0e78db-622d-4795-99ee-7e2055d4449e","Type":"ContainerDied","Data":"ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47"} Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.806356 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"91c71377-dafd-4693-9408-7c0ec206490e","Type":"ContainerStarted","Data":"98147f8c378f701ff4e9b00d742f4d8838d4e977d586d1429ea156e6450e1895"} Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.806555 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.809313 4688 generic.go:334] "Generic (PLEG): container finished" podID="d6f000f3-dc04-44a4-b019-d41633753240" containerID="e352f985df11149d81abae60d057bf391adc45f4773406b0de5643b189b81c91" exitCode=0 Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.809573 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nndbf" event={"ID":"d6f000f3-dc04-44a4-b019-d41633753240","Type":"ContainerDied","Data":"e352f985df11149d81abae60d057bf391adc45f4773406b0de5643b189b81c91"} Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.813983 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-92hwc" event={"ID":"4ac126ff-ac63-4d6a-b201-e6dbd8ba3153","Type":"ContainerStarted","Data":"be4049ec6027d1d2cd8634943a147879f03bf3ab505237cedaa9c1c5b77beaa4"} Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.814022 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-92hwc" Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.830504 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"33b0d963-d13d-4b40-b458-b85ec4f10131","Type":"ContainerStarted","Data":"5ef4fe9d9fdb49d05aefec8c689d39761b0bda7b2462caa6d5c31085236234fa"} Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.853759 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.50412194 podStartE2EDuration="20.853738477s" podCreationTimestamp="2025-11-25 12:30:34 +0000 UTC" firstStartedPulling="2025-11-25 12:30:45.287889167 +0000 UTC m=+995.397518025" lastFinishedPulling="2025-11-25 12:30:53.637505704 +0000 UTC m=+1003.747134562" observedRunningTime="2025-11-25 12:30:54.84114528 +0000 UTC m=+1004.950774148" watchObservedRunningTime="2025-11-25 12:30:54.853738477 +0000 UTC m=+1004.963367345" Nov 25 12:30:54 crc kubenswrapper[4688]: I1125 12:30:54.898926 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-92hwc" podStartSLOduration=10.330695327 podStartE2EDuration="17.898908657s" podCreationTimestamp="2025-11-25 12:30:37 +0000 UTC" firstStartedPulling="2025-11-25 12:30:45.290390114 +0000 UTC m=+995.400018982" lastFinishedPulling="2025-11-25 12:30:52.858603444 +0000 UTC m=+1002.968232312" observedRunningTime="2025-11-25 12:30:54.893335538 +0000 UTC m=+1005.002964406" watchObservedRunningTime="2025-11-25 12:30:54.898908657 +0000 UTC m=+1005.008537525" Nov 25 12:30:55 crc kubenswrapper[4688]: I1125 12:30:55.839137 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nndbf" event={"ID":"d6f000f3-dc04-44a4-b019-d41633753240","Type":"ContainerStarted","Data":"7a2caff1d3c3e4ccc2630656579180ac94a532be57d388e752b52e9d2c260a0b"} Nov 25 12:30:55 crc kubenswrapper[4688]: I1125 12:30:55.842006 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" event={"ID":"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0","Type":"ContainerStarted","Data":"52098d23d336a66c1e259fdf2a5d400d6738e7b990334a198ddc3230b22a5cdb"} Nov 25 12:30:55 crc kubenswrapper[4688]: I1125 12:30:55.842599 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:30:55 crc kubenswrapper[4688]: I1125 12:30:55.845789 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" event={"ID":"2c0e78db-622d-4795-99ee-7e2055d4449e","Type":"ContainerStarted","Data":"8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a"} Nov 25 12:30:55 crc kubenswrapper[4688]: I1125 12:30:55.846295 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:30:55 crc kubenswrapper[4688]: I1125 12:30:55.860760 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" podStartSLOduration=6.860740647 podStartE2EDuration="6.860740647s" podCreationTimestamp="2025-11-25 12:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:30:55.858160227 +0000 UTC m=+1005.967789095" watchObservedRunningTime="2025-11-25 12:30:55.860740647 +0000 UTC m=+1005.970369505" Nov 25 12:30:55 crc kubenswrapper[4688]: I1125 12:30:55.885319 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" podStartSLOduration=6.885299745 podStartE2EDuration="6.885299745s" podCreationTimestamp="2025-11-25 12:30:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:30:55.883167007 +0000 UTC m=+1005.992795895" watchObservedRunningTime="2025-11-25 12:30:55.885299745 +0000 UTC m=+1005.994928613" Nov 25 12:31:00 crc kubenswrapper[4688]: I1125 12:31:00.884993 4688 generic.go:334] "Generic (PLEG): container finished" podID="666dbc1a-fbdf-4ff1-b949-926ea3e70472" containerID="01e949e54cb9a9e330e623254cdef105246e06fd10a00617ae3be6f64cdbd4a9" exitCode=0 Nov 25 12:31:00 crc kubenswrapper[4688]: I1125 12:31:00.885064 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"666dbc1a-fbdf-4ff1-b949-926ea3e70472","Type":"ContainerDied","Data":"01e949e54cb9a9e330e623254cdef105246e06fd10a00617ae3be6f64cdbd4a9"} Nov 25 12:31:01 crc kubenswrapper[4688]: I1125 12:31:01.898658 4688 generic.go:334] "Generic (PLEG): container finished" podID="3c9c32d1-459d-4c35-8cf3-876542a657e9" containerID="294ed083109b053a11dfae25b8eaeceab700ac19735cb2df06a56bc953d8a288" exitCode=0 Nov 25 12:31:01 crc kubenswrapper[4688]: I1125 12:31:01.898738 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3c9c32d1-459d-4c35-8cf3-876542a657e9","Type":"ContainerDied","Data":"294ed083109b053a11dfae25b8eaeceab700ac19735cb2df06a56bc953d8a288"} Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.841023 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.914714 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"666dbc1a-fbdf-4ff1-b949-926ea3e70472","Type":"ContainerStarted","Data":"f61c58d0a59fe99832d814f1a43fd78ab80b8daa03d22407df54d3af2d02d133"} Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.918097 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nndbf" event={"ID":"d6f000f3-dc04-44a4-b019-d41633753240","Type":"ContainerStarted","Data":"23bbc521d15540db85210e54ac384507551695d093405029f49d42efed8b069a"} Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.918168 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.918203 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.922105 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3c9c32d1-459d-4c35-8cf3-876542a657e9","Type":"ContainerStarted","Data":"bee5871052538f2c1b7c901648f418de7b34285f08c0e320d812e01fd69d629e"} Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.945108 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.842308224 podStartE2EDuration="31.945080598s" podCreationTimestamp="2025-11-25 12:30:31 +0000 UTC" firstStartedPulling="2025-11-25 12:30:45.307671797 +0000 UTC m=+995.417300665" lastFinishedPulling="2025-11-25 12:30:52.410444171 +0000 UTC m=+1002.520073039" observedRunningTime="2025-11-25 12:31:02.941140362 +0000 UTC m=+1013.050769230" watchObservedRunningTime="2025-11-25 12:31:02.945080598 +0000 UTC m=+1013.054709466" Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.966319 4688 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.217013397 podStartE2EDuration="33.966301696s" podCreationTimestamp="2025-11-25 12:30:29 +0000 UTC" firstStartedPulling="2025-11-25 12:30:45.095946097 +0000 UTC m=+995.205574965" lastFinishedPulling="2025-11-25 12:30:52.845234386 +0000 UTC m=+1002.954863264" observedRunningTime="2025-11-25 12:31:02.962513284 +0000 UTC m=+1013.072142152" watchObservedRunningTime="2025-11-25 12:31:02.966301696 +0000 UTC m=+1013.075930564" Nov 25 12:31:02 crc kubenswrapper[4688]: I1125 12:31:02.989131 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-nndbf" podStartSLOduration=19.039902994 podStartE2EDuration="25.989108757s" podCreationTimestamp="2025-11-25 12:30:37 +0000 UTC" firstStartedPulling="2025-11-25 12:30:45.656667403 +0000 UTC m=+995.766296271" lastFinishedPulling="2025-11-25 12:30:52.605873146 +0000 UTC m=+1002.715502034" observedRunningTime="2025-11-25 12:31:02.982889281 +0000 UTC m=+1013.092518149" watchObservedRunningTime="2025-11-25 12:31:02.989108757 +0000 UTC m=+1013.098737625" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.578335 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.588185 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7lq62"] Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.588479 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" podUID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerName="dnsmasq-dns" containerID="cri-o://52098d23d336a66c1e259fdf2a5d400d6738e7b990334a198ddc3230b22a5cdb" gracePeriod=10 Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.589667 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.647024 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-wdtng"] Nov 25 12:31:04 crc kubenswrapper[4688]: E1125 12:31:04.647334 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" containerName="dnsmasq-dns" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.647350 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" containerName="dnsmasq-dns" Nov 25 12:31:04 crc kubenswrapper[4688]: E1125 12:31:04.647361 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" containerName="init" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.647366 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" containerName="init" Nov 25 12:31:04 crc kubenswrapper[4688]: E1125 12:31:04.647385 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" containerName="init" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.647391 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" containerName="init" Nov 25 12:31:04 crc kubenswrapper[4688]: E1125 12:31:04.647403 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" containerName="dnsmasq-dns" 
Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.647409 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" containerName="dnsmasq-dns" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.647587 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f11a2a-8da1-43f5-93fa-9d383e1351ef" containerName="dnsmasq-dns" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.647604 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d494f523-bb07-44d5-82fc-6c3c3a8432a3" containerName="dnsmasq-dns" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.648360 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.669138 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" podUID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.686939 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wdtng"] Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.768307 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwnx5\" (UniqueName: \"kubernetes.io/projected/168ae79c-b5b7-41f7-9443-96af2e8ad91c-kube-api-access-jwnx5\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.768369 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-dns-svc\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.768413 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.768443 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.768481 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-config\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.869868 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-config\") pod 
\"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.870325 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwnx5\" (UniqueName: \"kubernetes.io/projected/168ae79c-b5b7-41f7-9443-96af2e8ad91c-kube-api-access-jwnx5\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.870384 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-dns-svc\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.870480 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.870555 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.870950 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-config\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.871549 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.871671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-dns-svc\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.872332 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.893174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwnx5\" (UniqueName: \"kubernetes.io/projected/168ae79c-b5b7-41f7-9443-96af2e8ad91c-kube-api-access-jwnx5\") pod \"dnsmasq-dns-698758b865-wdtng\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " 
pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.910224 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.945783 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3644aadc-3c20-41f5-8969-f84b941eef27","Type":"ContainerStarted","Data":"c2b18077b9d1632ae3d51141042e7f242a6dc99e4883a6de56bb152477024d3b"} Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.960889 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"33b0d963-d13d-4b40-b458-b85ec4f10131","Type":"ContainerStarted","Data":"df2c1cd52ca48e4347b1a4b1a203c176a00f72378be1fb2b988826519bfc47a1"} Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.967877 4688 generic.go:334] "Generic (PLEG): container finished" podID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerID="52098d23d336a66c1e259fdf2a5d400d6738e7b990334a198ddc3230b22a5cdb" exitCode=0 Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.967948 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" event={"ID":"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0","Type":"ContainerDied","Data":"52098d23d336a66c1e259fdf2a5d400d6738e7b990334a198ddc3230b22a5cdb"} Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.982174 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.996981881 podStartE2EDuration="23.982153284s" podCreationTimestamp="2025-11-25 12:30:41 +0000 UTC" firstStartedPulling="2025-11-25 12:30:46.912121197 +0000 UTC m=+997.021750055" lastFinishedPulling="2025-11-25 12:31:03.89729259 +0000 UTC m=+1014.006921458" observedRunningTime="2025-11-25 12:31:04.97864636 +0000 UTC m=+1015.088275238" watchObservedRunningTime="2025-11-25 12:31:04.982153284 +0000 UTC m=+1015.091782152" Nov 25 12:31:04 crc kubenswrapper[4688]: I1125 12:31:04.983458 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sfg7j" event={"ID":"cfcc3ad5-018f-4723-bd38-1384baf3d72e","Type":"ContainerStarted","Data":"f91b9cbcac0e2e4ddd3f8d6da61e3a037fe255ba92b90c109df9578885e579eb"} Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.015276 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.442831069 podStartE2EDuration="27.015257771s" podCreationTimestamp="2025-11-25 12:30:38 +0000 UTC" firstStartedPulling="2025-11-25 12:30:45.452729152 +0000 UTC m=+995.562358020" lastFinishedPulling="2025-11-25 12:31:04.025155854 +0000 UTC m=+1014.134784722" observedRunningTime="2025-11-25 12:31:05.007891023 +0000 UTC m=+1015.117519891" watchObservedRunningTime="2025-11-25 12:31:05.015257771 +0000 UTC m=+1015.124886639" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.018842 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.040608 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-sfg7j" podStartSLOduration=5.596425967 podStartE2EDuration="16.04058713s" podCreationTimestamp="2025-11-25 12:30:49 +0000 UTC" firstStartedPulling="2025-11-25 12:30:53.437974201 +0000 UTC m=+1003.547603069" lastFinishedPulling="2025-11-25 12:31:03.882135364 +0000 UTC m=+1013.991764232" observedRunningTime="2025-11-25 12:31:05.030433957 +0000 UTC m=+1015.140062825" watchObservedRunningTime="2025-11-25 12:31:05.04058713 +0000 UTC m=+1015.150216008" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.152380 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.285564 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcjzg\" (UniqueName: \"kubernetes.io/projected/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-kube-api-access-jcjzg\") pod \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.285651 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-ovsdbserver-sb\") pod \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.285830 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-dns-svc\") pod \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.285860 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-config\") pod \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\" (UID: \"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0\") " Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.292825 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-kube-api-access-jcjzg" (OuterVolumeSpecName: "kube-api-access-jcjzg") pod "f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" (UID: "f03b68b7-0432-4cc7-bd2d-01a4fc3436a0"). InnerVolumeSpecName "kube-api-access-jcjzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.331371 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-config" (OuterVolumeSpecName: "config") pod "f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" (UID: "f03b68b7-0432-4cc7-bd2d-01a4fc3436a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.332681 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" (UID: "f03b68b7-0432-4cc7-bd2d-01a4fc3436a0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.335908 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" (UID: "f03b68b7-0432-4cc7-bd2d-01a4fc3436a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.387352 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.387395 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.387408 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcjzg\" (UniqueName: \"kubernetes.io/projected/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-kube-api-access-jcjzg\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.387422 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.524577 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wdtng"] Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.781614 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 25 12:31:05 crc kubenswrapper[4688]: E1125 12:31:05.782293 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerName="init" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.782312 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerName="init" Nov 25 12:31:05 crc kubenswrapper[4688]: E1125 12:31:05.782331 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerName="dnsmasq-dns" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.782337 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerName="dnsmasq-dns" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.782549 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" containerName="dnsmasq-dns" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.787100 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.789455 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.790080 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2m9fw" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.790097 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.791549 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.803231 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.896932 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.897006 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/000479f0-0b04-4867-989b-622c2e951f4b-lock\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.897106 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/000479f0-0b04-4867-989b-622c2e951f4b-cache\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.897135 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.897200 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqkdc\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-kube-api-access-fqkdc\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.997985 4688 generic.go:334] "Generic (PLEG): container finished" podID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerID="6134679f17358d989e6121715effedf3d8d07a30aca8318f74f4d6ef1a5dec14" exitCode=0 Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998058 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wdtng" event={"ID":"168ae79c-b5b7-41f7-9443-96af2e8ad91c","Type":"ContainerDied","Data":"6134679f17358d989e6121715effedf3d8d07a30aca8318f74f4d6ef1a5dec14"} Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998083 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wdtng" 
event={"ID":"168ae79c-b5b7-41f7-9443-96af2e8ad91c","Type":"ContainerStarted","Data":"b51023158b26afb07f66de62cb5301e74d5e7db65d5cb48ffcb71d37e9fd6e61"} Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998194 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/000479f0-0b04-4867-989b-622c2e951f4b-cache\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998229 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998266 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqkdc\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-kube-api-access-fqkdc\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998323 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998368 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/000479f0-0b04-4867-989b-622c2e951f4b-lock\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: E1125 12:31:05.998615 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 12:31:05 crc kubenswrapper[4688]: E1125 12:31:05.998641 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998643 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: E1125 12:31:05.998690 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift podName:000479f0-0b04-4867-989b-622c2e951f4b nodeName:}" failed. No retries permitted until 2025-11-25 12:31:06.498671129 +0000 UTC m=+1016.608300007 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift") pod "swift-storage-0" (UID: "000479f0-0b04-4867-989b-622c2e951f4b") : configmap "swift-ring-files" not found Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998713 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/000479f0-0b04-4867-989b-622c2e951f4b-cache\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:05 crc kubenswrapper[4688]: I1125 12:31:05.998768 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/000479f0-0b04-4867-989b-622c2e951f4b-lock\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.001094 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" event={"ID":"f03b68b7-0432-4cc7-bd2d-01a4fc3436a0","Type":"ContainerDied","Data":"5a22950d2fdfe4a15c0bd9e5cab8e3a47faac4014476539cbe73d11a89e31234"} Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.001140 4688 scope.go:117] "RemoveContainer" containerID="52098d23d336a66c1e259fdf2a5d400d6738e7b990334a198ddc3230b22a5cdb" Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.001249 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7lq62" Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.022950 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqkdc\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-kube-api-access-fqkdc\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.031769 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.138922 4688 scope.go:117] "RemoveContainer" containerID="b8ee7c713f4ce31d4ee67dd9a54a2798a4fec9274475ce54c8df34b51b92adbe" Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.169824 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7lq62"] Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.175221 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7lq62"] Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.507052 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:06 crc kubenswrapper[4688]: E1125 12:31:06.507252 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 12:31:06 crc kubenswrapper[4688]: E1125 12:31:06.507275 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found 
Nov 25 12:31:06 crc kubenswrapper[4688]: E1125 12:31:06.507334 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift podName:000479f0-0b04-4867-989b-622c2e951f4b nodeName:}" failed. No retries permitted until 2025-11-25 12:31:07.507316391 +0000 UTC m=+1017.616945269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift") pod "swift-storage-0" (UID: "000479f0-0b04-4867-989b-622c2e951f4b") : configmap "swift-ring-files" not found
Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.651712 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.693078 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.749600 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03b68b7-0432-4cc7-bd2d-01a4fc3436a0" path="/var/lib/kubelet/pods/f03b68b7-0432-4cc7-bd2d-01a4fc3436a0/volumes"
Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.829633 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Nov 25 12:31:06 crc kubenswrapper[4688]: I1125 12:31:06.869122 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.009751 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wdtng" event={"ID":"168ae79c-b5b7-41f7-9443-96af2e8ad91c","Type":"ContainerStarted","Data":"d0030fd626cdb47185577da1466f388c66501f8de7eec3ff64bf3e2e7f22f489"}
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.009986 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wdtng"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.011092 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.011849 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.035612 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-wdtng" podStartSLOduration=3.03559232 podStartE2EDuration="3.03559232s" podCreationTimestamp="2025-11-25 12:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:31:07.028796038 +0000 UTC m=+1017.138424906" watchObservedRunningTime="2025-11-25 12:31:07.03559232 +0000 UTC m=+1017.145221188"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.052749 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.061735 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.341221 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.342868 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.346633 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.346634 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.346818 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-c2p2v"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.346869 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.355577 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.421366 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l846\" (UniqueName: \"kubernetes.io/projected/0dd00154-420b-4be7-84de-ea971d680ff3-kube-api-access-4l846\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.421417 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.421501 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0dd00154-420b-4be7-84de-ea971d680ff3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.421615 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd00154-420b-4be7-84de-ea971d680ff3-config\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.421656 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.421688 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd00154-420b-4be7-84de-ea971d680ff3-scripts\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.421716 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.523362 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.523485 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0dd00154-420b-4be7-84de-ea971d680ff3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.523545 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd00154-420b-4be7-84de-ea971d680ff3-config\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.523579 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.523605 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd00154-420b-4be7-84de-ea971d680ff3-scripts\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.523630 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.523662 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.523700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l846\" (UniqueName: \"kubernetes.io/projected/0dd00154-420b-4be7-84de-ea971d680ff3-kube-api-access-4l846\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: E1125 12:31:07.523851 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 25 12:31:07 crc kubenswrapper[4688]: E1125 12:31:07.523882 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 25 12:31:07 crc kubenswrapper[4688]: E1125 12:31:07.523937 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift podName:000479f0-0b04-4867-989b-622c2e951f4b nodeName:}" failed. No retries permitted until 2025-11-25 12:31:09.523919707 +0000 UTC m=+1019.633548585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift") pod "swift-storage-0" (UID: "000479f0-0b04-4867-989b-622c2e951f4b") : configmap "swift-ring-files" not found
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.524097 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0dd00154-420b-4be7-84de-ea971d680ff3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.524602 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd00154-420b-4be7-84de-ea971d680ff3-scripts\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.524656 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd00154-420b-4be7-84de-ea971d680ff3-config\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.530305 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.530307 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.530627 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd00154-420b-4be7-84de-ea971d680ff3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.542282 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l846\" (UniqueName: \"kubernetes.io/projected/0dd00154-420b-4be7-84de-ea971d680ff3-kube-api-access-4l846\") pod \"ovn-northd-0\" (UID: \"0dd00154-420b-4be7-84de-ea971d680ff3\") " pod="openstack/ovn-northd-0"
Nov 25 12:31:07 crc kubenswrapper[4688]: I1125 12:31:07.661013 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 25 12:31:08 crc kubenswrapper[4688]: I1125 12:31:08.075543 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.024943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0dd00154-420b-4be7-84de-ea971d680ff3","Type":"ContainerStarted","Data":"90128c7744a6a601091b45272a7f395f9d11582ab39b2853783616641c98b145"}
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.563654 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0"
Nov 25 12:31:09 crc kubenswrapper[4688]: E1125 12:31:09.563833 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 25 12:31:09 crc kubenswrapper[4688]: E1125 12:31:09.564016 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 25 12:31:09 crc kubenswrapper[4688]: E1125 12:31:09.564072 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift podName:000479f0-0b04-4867-989b-622c2e951f4b nodeName:}" failed. No retries permitted until 2025-11-25 12:31:13.564052736 +0000 UTC m=+1023.673681604 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift") pod "swift-storage-0" (UID: "000479f0-0b04-4867-989b-622c2e951f4b") : configmap "swift-ring-files" not found
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.638299 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-brb2n"]
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.639446 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.641244 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.641289 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.642311 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.652826 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-brb2n"]
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.665498 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-combined-ca-bundle\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.665632 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a917ea03-c867-4449-a317-2ed904672efa-etc-swift\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.665667 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-dispersionconf\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.665696 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-swiftconf\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.665713 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-ring-data-devices\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.665771 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrcw\" (UniqueName: \"kubernetes.io/projected/a917ea03-c867-4449-a317-2ed904672efa-kube-api-access-zqrcw\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.665829 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-scripts\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.767831 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-combined-ca-bundle\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.767890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a917ea03-c867-4449-a317-2ed904672efa-etc-swift\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.767938 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-dispersionconf\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.767969 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-swiftconf\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.767994 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-ring-data-devices\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.768030 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqrcw\" (UniqueName: \"kubernetes.io/projected/a917ea03-c867-4449-a317-2ed904672efa-kube-api-access-zqrcw\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.768118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-scripts\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.768515 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a917ea03-c867-4449-a317-2ed904672efa-etc-swift\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.769304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-scripts\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.769304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-ring-data-devices\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.771993 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-dispersionconf\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.772288 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-swiftconf\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.772366 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-combined-ca-bundle\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.784103 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqrcw\" (UniqueName: \"kubernetes.io/projected/a917ea03-c867-4449-a317-2ed904672efa-kube-api-access-zqrcw\") pod \"swift-ring-rebalance-brb2n\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:09 crc kubenswrapper[4688]: I1125 12:31:09.953961 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-brb2n"
Nov 25 12:31:10 crc kubenswrapper[4688]: I1125 12:31:10.058873 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0dd00154-420b-4be7-84de-ea971d680ff3","Type":"ContainerStarted","Data":"88b52fa96bac226a7792353bffec727c24d454774aba20ecce32bc277c6feea7"}
Nov 25 12:31:10 crc kubenswrapper[4688]: I1125 12:31:10.059460 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0dd00154-420b-4be7-84de-ea971d680ff3","Type":"ContainerStarted","Data":"69df2b77373ffdf10085366e5af207ed866e72f5907d7cb9d9ebe2c2ab874408"}
Nov 25 12:31:10 crc kubenswrapper[4688]: I1125 12:31:10.059510 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 25 12:31:10 crc kubenswrapper[4688]: I1125 12:31:10.081070 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.263138138 podStartE2EDuration="3.081050323s" podCreationTimestamp="2025-11-25 12:31:07 +0000 UTC" firstStartedPulling="2025-11-25 12:31:08.083237918 +0000 UTC m=+1018.192866786" lastFinishedPulling="2025-11-25 12:31:08.901150103 +0000 UTC m=+1019.010778971" observedRunningTime="2025-11-25 12:31:10.078902275 +0000 UTC m=+1020.188531143" watchObservedRunningTime="2025-11-25 12:31:10.081050323 +0000 UTC m=+1020.190679191"
Nov 25 12:31:10 crc kubenswrapper[4688]: I1125 12:31:10.430966 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-brb2n"]
Nov 25 12:31:10 crc kubenswrapper[4688]: W1125 12:31:10.432919 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda917ea03_c867_4449_a317_2ed904672efa.slice/crio-d44ddf30159416ac9b985572d19d3ff1e2615e9fe7eee3a531f60bdc68f668d5 WatchSource:0}: Error finding container d44ddf30159416ac9b985572d19d3ff1e2615e9fe7eee3a531f60bdc68f668d5: Status 404 returned error can't find the container with id d44ddf30159416ac9b985572d19d3ff1e2615e9fe7eee3a531f60bdc68f668d5
Nov 25 12:31:11 crc kubenswrapper[4688]: I1125 12:31:11.069673 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-brb2n" event={"ID":"a917ea03-c867-4449-a317-2ed904672efa","Type":"ContainerStarted","Data":"d44ddf30159416ac9b985572d19d3ff1e2615e9fe7eee3a531f60bdc68f668d5"}
Nov 25 12:31:11 crc kubenswrapper[4688]: I1125 12:31:11.133655 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 25 12:31:11 crc kubenswrapper[4688]: I1125 12:31:11.133738 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 25 12:31:11 crc kubenswrapper[4688]: I1125 12:31:11.215054 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.151602 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.379177 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-53c3-account-create-b6rlg"]
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.380334 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-53c3-account-create-b6rlg"
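[note] In the "Observed pod startup duration" entry for ovn-northd-0 above, the two durations are mutually consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go check of the arithmetic, with timestamps copied from the entry; the pull-time exclusion is inferred from these numbers, not quoted from kubelet source:

// startup_slo.go - verifies 3.081050323s and 2.263138138s from the log entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-25 12:31:07 +0000 UTC")
	running := parse("2025-11-25 12:31:10.081050323 +0000 UTC")
	pullStart := parse("2025-11-25 12:31:08.083237918 +0000 UTC")
	pullEnd := parse("2025-11-25 12:31:08.901150103 +0000 UTC")

	e2e := running.Sub(created)         // 3.081050323s = podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 2.263138138s = podStartSLOduration
	fmt.Println(e2e, slo)
}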
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.382056 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.387246 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-53c3-account-create-b6rlg"]
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.449791 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8fq9b"]
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.451185 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fq9b"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.456000 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8fq9b"]
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.528677 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4495873-670c-487e-9989-b6c65bbd0c04-operator-scripts\") pod \"keystone-53c3-account-create-b6rlg\" (UID: \"b4495873-670c-487e-9989-b6c65bbd0c04\") " pod="openstack/keystone-53c3-account-create-b6rlg"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.528772 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf69f\" (UniqueName: \"kubernetes.io/projected/b4495873-670c-487e-9989-b6c65bbd0c04-kube-api-access-tf69f\") pod \"keystone-53c3-account-create-b6rlg\" (UID: \"b4495873-670c-487e-9989-b6c65bbd0c04\") " pod="openstack/keystone-53c3-account-create-b6rlg"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.563945 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.563987 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.630411 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa319425-2bf3-486e-b80c-52d377a48462-operator-scripts\") pod \"keystone-db-create-8fq9b\" (UID: \"fa319425-2bf3-486e-b80c-52d377a48462\") " pod="openstack/keystone-db-create-8fq9b"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.631727 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdl8s\" (UniqueName: \"kubernetes.io/projected/fa319425-2bf3-486e-b80c-52d377a48462-kube-api-access-kdl8s\") pod \"keystone-db-create-8fq9b\" (UID: \"fa319425-2bf3-486e-b80c-52d377a48462\") " pod="openstack/keystone-db-create-8fq9b"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.631777 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4495873-670c-487e-9989-b6c65bbd0c04-operator-scripts\") pod \"keystone-53c3-account-create-b6rlg\" (UID: \"b4495873-670c-487e-9989-b6c65bbd0c04\") " pod="openstack/keystone-53c3-account-create-b6rlg"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.631834 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf69f\" (UniqueName: \"kubernetes.io/projected/b4495873-670c-487e-9989-b6c65bbd0c04-kube-api-access-tf69f\") pod \"keystone-53c3-account-create-b6rlg\" (UID: \"b4495873-670c-487e-9989-b6c65bbd0c04\") " pod="openstack/keystone-53c3-account-create-b6rlg"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.632880 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4495873-670c-487e-9989-b6c65bbd0c04-operator-scripts\") pod \"keystone-53c3-account-create-b6rlg\" (UID: \"b4495873-670c-487e-9989-b6c65bbd0c04\") " pod="openstack/keystone-53c3-account-create-b6rlg"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.649580 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.651189 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf69f\" (UniqueName: \"kubernetes.io/projected/b4495873-670c-487e-9989-b6c65bbd0c04-kube-api-access-tf69f\") pod \"keystone-53c3-account-create-b6rlg\" (UID: \"b4495873-670c-487e-9989-b6c65bbd0c04\") " pod="openstack/keystone-53c3-account-create-b6rlg"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.667339 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vglmq"]
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.669377 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vglmq"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.677016 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vglmq"]
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.705539 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-53c3-account-create-b6rlg"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.733062 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa319425-2bf3-486e-b80c-52d377a48462-operator-scripts\") pod \"keystone-db-create-8fq9b\" (UID: \"fa319425-2bf3-486e-b80c-52d377a48462\") " pod="openstack/keystone-db-create-8fq9b"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.734016 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa319425-2bf3-486e-b80c-52d377a48462-operator-scripts\") pod \"keystone-db-create-8fq9b\" (UID: \"fa319425-2bf3-486e-b80c-52d377a48462\") " pod="openstack/keystone-db-create-8fq9b"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.736686 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdl8s\" (UniqueName: \"kubernetes.io/projected/fa319425-2bf3-486e-b80c-52d377a48462-kube-api-access-kdl8s\") pod \"keystone-db-create-8fq9b\" (UID: \"fa319425-2bf3-486e-b80c-52d377a48462\") " pod="openstack/keystone-db-create-8fq9b"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.752806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdl8s\" (UniqueName: \"kubernetes.io/projected/fa319425-2bf3-486e-b80c-52d377a48462-kube-api-access-kdl8s\") pod \"keystone-db-create-8fq9b\" (UID: \"fa319425-2bf3-486e-b80c-52d377a48462\") " pod="openstack/keystone-db-create-8fq9b"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.771569 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fq9b"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.787620 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-76cf-account-create-2z6wp"]
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.789015 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-76cf-account-create-2z6wp"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.792644 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.797051 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76cf-account-create-2z6wp"]
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.838087 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9087e75c-690f-4f83-b63a-af99c771cd3a-operator-scripts\") pod \"placement-db-create-vglmq\" (UID: \"9087e75c-690f-4f83-b63a-af99c771cd3a\") " pod="openstack/placement-db-create-vglmq"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.838137 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8ds\" (UniqueName: \"kubernetes.io/projected/7b487b4e-5b03-439f-80f9-53c3e37121c8-kube-api-access-nr8ds\") pod \"placement-76cf-account-create-2z6wp\" (UID: \"7b487b4e-5b03-439f-80f9-53c3e37121c8\") " pod="openstack/placement-76cf-account-create-2z6wp"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.838222 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b487b4e-5b03-439f-80f9-53c3e37121c8-operator-scripts\") pod \"placement-76cf-account-create-2z6wp\" (UID: \"7b487b4e-5b03-439f-80f9-53c3e37121c8\") " pod="openstack/placement-76cf-account-create-2z6wp"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.838293 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrzz\" (UniqueName: \"kubernetes.io/projected/9087e75c-690f-4f83-b63a-af99c771cd3a-kube-api-access-bdrzz\") pod \"placement-db-create-vglmq\" (UID: \"9087e75c-690f-4f83-b63a-af99c771cd3a\") " pod="openstack/placement-db-create-vglmq"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.940261 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdrzz\" (UniqueName: \"kubernetes.io/projected/9087e75c-690f-4f83-b63a-af99c771cd3a-kube-api-access-bdrzz\") pod \"placement-db-create-vglmq\" (UID: \"9087e75c-690f-4f83-b63a-af99c771cd3a\") " pod="openstack/placement-db-create-vglmq"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.940506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9087e75c-690f-4f83-b63a-af99c771cd3a-operator-scripts\") pod \"placement-db-create-vglmq\" (UID: \"9087e75c-690f-4f83-b63a-af99c771cd3a\") " pod="openstack/placement-db-create-vglmq"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.940621 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr8ds\" (UniqueName: \"kubernetes.io/projected/7b487b4e-5b03-439f-80f9-53c3e37121c8-kube-api-access-nr8ds\") pod \"placement-76cf-account-create-2z6wp\" (UID: \"7b487b4e-5b03-439f-80f9-53c3e37121c8\") " pod="openstack/placement-76cf-account-create-2z6wp"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.940751 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b487b4e-5b03-439f-80f9-53c3e37121c8-operator-scripts\") pod \"placement-76cf-account-create-2z6wp\" (UID: \"7b487b4e-5b03-439f-80f9-53c3e37121c8\") " pod="openstack/placement-76cf-account-create-2z6wp"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.941852 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b487b4e-5b03-439f-80f9-53c3e37121c8-operator-scripts\") pod \"placement-76cf-account-create-2z6wp\" (UID: \"7b487b4e-5b03-439f-80f9-53c3e37121c8\") " pod="openstack/placement-76cf-account-create-2z6wp"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.941966 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9087e75c-690f-4f83-b63a-af99c771cd3a-operator-scripts\") pod \"placement-db-create-vglmq\" (UID: \"9087e75c-690f-4f83-b63a-af99c771cd3a\") " pod="openstack/placement-db-create-vglmq"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.959510 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr8ds\" (UniqueName: \"kubernetes.io/projected/7b487b4e-5b03-439f-80f9-53c3e37121c8-kube-api-access-nr8ds\") pod \"placement-76cf-account-create-2z6wp\" (UID: \"7b487b4e-5b03-439f-80f9-53c3e37121c8\") " pod="openstack/placement-76cf-account-create-2z6wp"
Nov 25 12:31:12 crc kubenswrapper[4688]: I1125 12:31:12.959717 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdrzz\" (UniqueName: \"kubernetes.io/projected/9087e75c-690f-4f83-b63a-af99c771cd3a-kube-api-access-bdrzz\") pod \"placement-db-create-vglmq\" (UID: \"9087e75c-690f-4f83-b63a-af99c771cd3a\") " pod="openstack/placement-db-create-vglmq"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.024077 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vglmq"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.098672 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kr2hz"]
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.099958 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kr2hz"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.110749 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4d6a-account-create-9sw4q"]
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.111992 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4d6a-account-create-9sw4q"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.114388 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.121734 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kr2hz"]
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.121972 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-76cf-account-create-2z6wp"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.135061 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4d6a-account-create-9sw4q"]
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.193915 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.244923 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs9r4\" (UniqueName: \"kubernetes.io/projected/97b3f267-42b4-46bc-a0e1-195315d0a782-kube-api-access-gs9r4\") pod \"glance-db-create-kr2hz\" (UID: \"97b3f267-42b4-46bc-a0e1-195315d0a782\") " pod="openstack/glance-db-create-kr2hz"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.244983 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797bab00-a9d6-4126-87d4-da7a46c3d318-operator-scripts\") pod \"glance-4d6a-account-create-9sw4q\" (UID: \"797bab00-a9d6-4126-87d4-da7a46c3d318\") " pod="openstack/glance-4d6a-account-create-9sw4q"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.245114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nts4n\" (UniqueName: \"kubernetes.io/projected/797bab00-a9d6-4126-87d4-da7a46c3d318-kube-api-access-nts4n\") pod \"glance-4d6a-account-create-9sw4q\" (UID: \"797bab00-a9d6-4126-87d4-da7a46c3d318\") " pod="openstack/glance-4d6a-account-create-9sw4q"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.245297 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b3f267-42b4-46bc-a0e1-195315d0a782-operator-scripts\") pod \"glance-db-create-kr2hz\" (UID: \"97b3f267-42b4-46bc-a0e1-195315d0a782\") " pod="openstack/glance-db-create-kr2hz"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.348088 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs9r4\" (UniqueName: \"kubernetes.io/projected/97b3f267-42b4-46bc-a0e1-195315d0a782-kube-api-access-gs9r4\") pod \"glance-db-create-kr2hz\" (UID: \"97b3f267-42b4-46bc-a0e1-195315d0a782\") " pod="openstack/glance-db-create-kr2hz"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.348183 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797bab00-a9d6-4126-87d4-da7a46c3d318-operator-scripts\") pod \"glance-4d6a-account-create-9sw4q\" (UID: \"797bab00-a9d6-4126-87d4-da7a46c3d318\") " pod="openstack/glance-4d6a-account-create-9sw4q"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.348281 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nts4n\" (UniqueName: \"kubernetes.io/projected/797bab00-a9d6-4126-87d4-da7a46c3d318-kube-api-access-nts4n\") pod \"glance-4d6a-account-create-9sw4q\" (UID: \"797bab00-a9d6-4126-87d4-da7a46c3d318\") " pod="openstack/glance-4d6a-account-create-9sw4q"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.348406 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b3f267-42b4-46bc-a0e1-195315d0a782-operator-scripts\") pod \"glance-db-create-kr2hz\" (UID: \"97b3f267-42b4-46bc-a0e1-195315d0a782\") " pod="openstack/glance-db-create-kr2hz"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.349755 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b3f267-42b4-46bc-a0e1-195315d0a782-operator-scripts\") pod \"glance-db-create-kr2hz\" (UID: \"97b3f267-42b4-46bc-a0e1-195315d0a782\") " pod="openstack/glance-db-create-kr2hz"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.351389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797bab00-a9d6-4126-87d4-da7a46c3d318-operator-scripts\") pod \"glance-4d6a-account-create-9sw4q\" (UID: \"797bab00-a9d6-4126-87d4-da7a46c3d318\") " pod="openstack/glance-4d6a-account-create-9sw4q"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.367424 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nts4n\" (UniqueName: \"kubernetes.io/projected/797bab00-a9d6-4126-87d4-da7a46c3d318-kube-api-access-nts4n\") pod \"glance-4d6a-account-create-9sw4q\" (UID: \"797bab00-a9d6-4126-87d4-da7a46c3d318\") " pod="openstack/glance-4d6a-account-create-9sw4q"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.370127 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs9r4\" (UniqueName: \"kubernetes.io/projected/97b3f267-42b4-46bc-a0e1-195315d0a782-kube-api-access-gs9r4\") pod \"glance-db-create-kr2hz\" (UID: \"97b3f267-42b4-46bc-a0e1-195315d0a782\") " pod="openstack/glance-db-create-kr2hz"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.421333 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kr2hz"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.431136 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4d6a-account-create-9sw4q"
Nov 25 12:31:13 crc kubenswrapper[4688]: I1125 12:31:13.654016 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0"
Nov 25 12:31:13 crc kubenswrapper[4688]: E1125 12:31:13.654736 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 25 12:31:13 crc kubenswrapper[4688]: E1125 12:31:13.655045 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 25 12:31:13 crc kubenswrapper[4688]: E1125 12:31:13.655133 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift podName:000479f0-0b04-4867-989b-622c2e951f4b nodeName:}" failed. No retries permitted until 2025-11-25 12:31:21.655116642 +0000 UTC m=+1031.764745510 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift") pod "swift-storage-0" (UID: "000479f0-0b04-4867-989b-622c2e951f4b") : configmap "swift-ring-files" not found
Nov 25 12:31:14 crc kubenswrapper[4688]: I1125 12:31:14.100993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-brb2n" event={"ID":"a917ea03-c867-4449-a317-2ed904672efa","Type":"ContainerStarted","Data":"8902f9799b89feb510b78c06f839f9c86ceb651d88b6fa81a678488ace7a8732"}
Nov 25 12:31:14 crc kubenswrapper[4688]: I1125 12:31:14.125237 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-brb2n" podStartSLOduration=1.85311679 podStartE2EDuration="5.125219042s" podCreationTimestamp="2025-11-25 12:31:09 +0000 UTC" firstStartedPulling="2025-11-25 12:31:10.435469885 +0000 UTC m=+1020.545098753" lastFinishedPulling="2025-11-25 12:31:13.707572137 +0000 UTC m=+1023.817201005" observedRunningTime="2025-11-25 12:31:14.12139723 +0000 UTC m=+1024.231026098" watchObservedRunningTime="2025-11-25 12:31:14.125219042 +0000 UTC m=+1024.234847910"
Nov 25 12:31:14 crc kubenswrapper[4688]: I1125 12:31:14.198741 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vglmq"]
Nov 25 12:31:14 crc kubenswrapper[4688]: W1125 12:31:14.202885 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod797bab00_a9d6_4126_87d4_da7a46c3d318.slice/crio-570a72bf3ad1b1535a8e2bf93419c52d3a7e22b8a8ed57aabb9732a83e638eba WatchSource:0}: Error finding container 570a72bf3ad1b1535a8e2bf93419c52d3a7e22b8a8ed57aabb9732a83e638eba: Status 404 returned error can't find the container with id 570a72bf3ad1b1535a8e2bf93419c52d3a7e22b8a8ed57aabb9732a83e638eba
Nov 25 12:31:14 crc kubenswrapper[4688]: W1125 12:31:14.203108 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9087e75c_690f_4f83_b63a_af99c771cd3a.slice/crio-bdc7ed82fe302754188b44000d81b1bbec90bd9aaeb693615ad03a0dc8349bbf WatchSource:0}: Error finding container bdc7ed82fe302754188b44000d81b1bbec90bd9aaeb693615ad03a0dc8349bbf: Status 404 returned error can't find the container with id bdc7ed82fe302754188b44000d81b1bbec90bd9aaeb693615ad03a0dc8349bbf
Nov 25 12:31:14 crc kubenswrapper[4688]: I1125 12:31:14.209483 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4d6a-account-create-9sw4q"]
Nov 25 12:31:14 crc kubenswrapper[4688]: I1125 12:31:14.326898 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76cf-account-create-2z6wp"]
Nov 25 12:31:14 crc kubenswrapper[4688]: I1125 12:31:14.339202 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8fq9b"]
Nov 25 12:31:14 crc kubenswrapper[4688]: I1125 12:31:14.353400 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kr2hz"]
Nov 25 12:31:14 crc kubenswrapper[4688]: I1125 12:31:14.360187 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-53c3-account-create-b6rlg"]
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.021685 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-wdtng"
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.073662 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vc4v4"]
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.074120 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" podUID="2c0e78db-622d-4795-99ee-7e2055d4449e" containerName="dnsmasq-dns" containerID="cri-o://8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a" gracePeriod=10
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.112770 4688 generic.go:334] "Generic (PLEG): container finished" podID="97b3f267-42b4-46bc-a0e1-195315d0a782" containerID="e5cb545abc92b3e3e2ab7982202ffe13f3e8c5f2ffba02c37bdd4e9c295005c7" exitCode=0
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.112853 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kr2hz" event={"ID":"97b3f267-42b4-46bc-a0e1-195315d0a782","Type":"ContainerDied","Data":"e5cb545abc92b3e3e2ab7982202ffe13f3e8c5f2ffba02c37bdd4e9c295005c7"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.112889 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kr2hz" event={"ID":"97b3f267-42b4-46bc-a0e1-195315d0a782","Type":"ContainerStarted","Data":"e54b0d779a2101d5061ff6c1d561e998efcb3089b0ff6868daa00975127b48d8"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.116330 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa319425-2bf3-486e-b80c-52d377a48462" containerID="3219f83158aa64779a6f470c70b3713565f284ac600de8b731574eec53073fb9" exitCode=0
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.116490 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fq9b" event={"ID":"fa319425-2bf3-486e-b80c-52d377a48462","Type":"ContainerDied","Data":"3219f83158aa64779a6f470c70b3713565f284ac600de8b731574eec53073fb9"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.116515 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fq9b" event={"ID":"fa319425-2bf3-486e-b80c-52d377a48462","Type":"ContainerStarted","Data":"f8acf3d534b3d0fb126a594d0714d6fa9b9a42740119acb555ee261ff75c1bce"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.118348 4688 generic.go:334] "Generic (PLEG): container finished" podID="b4495873-670c-487e-9989-b6c65bbd0c04" containerID="987a1cf4240b583095ea403bcad3e3341f724add0adc0f0f2986f67e1cfc875b" exitCode=0
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.118466 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-53c3-account-create-b6rlg" event={"ID":"b4495873-670c-487e-9989-b6c65bbd0c04","Type":"ContainerDied","Data":"987a1cf4240b583095ea403bcad3e3341f724add0adc0f0f2986f67e1cfc875b"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.118656 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-53c3-account-create-b6rlg" event={"ID":"b4495873-670c-487e-9989-b6c65bbd0c04","Type":"ContainerStarted","Data":"201bc9570ae30a51a6baf7aefa775e7e172e9db55de8613304aabea6e76eac81"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.120819 4688 generic.go:334] "Generic (PLEG): container finished" podID="797bab00-a9d6-4126-87d4-da7a46c3d318" containerID="1b944fb434524bd401c6ba3d3b183461a0de872fcba04560b791fd3ec3e98fd8" exitCode=0
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.120908 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4d6a-account-create-9sw4q" event={"ID":"797bab00-a9d6-4126-87d4-da7a46c3d318","Type":"ContainerDied","Data":"1b944fb434524bd401c6ba3d3b183461a0de872fcba04560b791fd3ec3e98fd8"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.120941 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4d6a-account-create-9sw4q" event={"ID":"797bab00-a9d6-4126-87d4-da7a46c3d318","Type":"ContainerStarted","Data":"570a72bf3ad1b1535a8e2bf93419c52d3a7e22b8a8ed57aabb9732a83e638eba"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.123005 4688 generic.go:334] "Generic (PLEG): container finished" podID="7b487b4e-5b03-439f-80f9-53c3e37121c8" containerID="e6a626027b04d17da404309e4e66bf2873f532ce4108d039cf2b2c6253c40c27" exitCode=0
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.123095 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76cf-account-create-2z6wp" event={"ID":"7b487b4e-5b03-439f-80f9-53c3e37121c8","Type":"ContainerDied","Data":"e6a626027b04d17da404309e4e66bf2873f532ce4108d039cf2b2c6253c40c27"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.123155 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76cf-account-create-2z6wp" event={"ID":"7b487b4e-5b03-439f-80f9-53c3e37121c8","Type":"ContainerStarted","Data":"b3da0b5a1356d3f84cf3dd63a13062f985432c5e9236cf704e33ea95a0ec2d06"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.125015 4688 generic.go:334] "Generic (PLEG): container finished" podID="9087e75c-690f-4f83-b63a-af99c771cd3a" containerID="9a5253ee3b047f77fca5b89fd4f9a5c11dc374e599b3f219fa0f2d09d9a73bc7" exitCode=0
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.125887 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vglmq" event={"ID":"9087e75c-690f-4f83-b63a-af99c771cd3a","Type":"ContainerDied","Data":"9a5253ee3b047f77fca5b89fd4f9a5c11dc374e599b3f219fa0f2d09d9a73bc7"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.125925 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vglmq" event={"ID":"9087e75c-690f-4f83-b63a-af99c771cd3a","Type":"ContainerStarted","Data":"bdc7ed82fe302754188b44000d81b1bbec90bd9aaeb693615ad03a0dc8349bbf"}
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.530692 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4"
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.708321 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-nb\") pod \"2c0e78db-622d-4795-99ee-7e2055d4449e\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") "
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.708418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6wks\" (UniqueName: \"kubernetes.io/projected/2c0e78db-622d-4795-99ee-7e2055d4449e-kube-api-access-h6wks\") pod \"2c0e78db-622d-4795-99ee-7e2055d4449e\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") "
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.708454 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-dns-svc\") pod \"2c0e78db-622d-4795-99ee-7e2055d4449e\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") "
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.708582 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-config\") pod \"2c0e78db-622d-4795-99ee-7e2055d4449e\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") "
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.708653 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-sb\") pod \"2c0e78db-622d-4795-99ee-7e2055d4449e\" (UID: \"2c0e78db-622d-4795-99ee-7e2055d4449e\") "
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.718271 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0e78db-622d-4795-99ee-7e2055d4449e-kube-api-access-h6wks" (OuterVolumeSpecName: "kube-api-access-h6wks") pod "2c0e78db-622d-4795-99ee-7e2055d4449e" (UID: "2c0e78db-622d-4795-99ee-7e2055d4449e"). InnerVolumeSpecName "kube-api-access-h6wks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.746682 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-config" (OuterVolumeSpecName: "config") pod "2c0e78db-622d-4795-99ee-7e2055d4449e" (UID: "2c0e78db-622d-4795-99ee-7e2055d4449e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.747265 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c0e78db-622d-4795-99ee-7e2055d4449e" (UID: "2c0e78db-622d-4795-99ee-7e2055d4449e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.751919 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c0e78db-622d-4795-99ee-7e2055d4449e" (UID: "2c0e78db-622d-4795-99ee-7e2055d4449e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.753982 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c0e78db-622d-4795-99ee-7e2055d4449e" (UID: "2c0e78db-622d-4795-99ee-7e2055d4449e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.810050 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-config\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.810096 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.810111 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.810125 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6wks\" (UniqueName: \"kubernetes.io/projected/2c0e78db-622d-4795-99ee-7e2055d4449e-kube-api-access-h6wks\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:15 crc kubenswrapper[4688]: I1125 12:31:15.810139 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c0e78db-622d-4795-99ee-7e2055d4449e-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.135027 4688 generic.go:334] "Generic (PLEG): container finished" podID="2c0e78db-622d-4795-99ee-7e2055d4449e" containerID="8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a" exitCode=0
Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.135110 4688 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.135134 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" event={"ID":"2c0e78db-622d-4795-99ee-7e2055d4449e","Type":"ContainerDied","Data":"8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a"} Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.135951 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vc4v4" event={"ID":"2c0e78db-622d-4795-99ee-7e2055d4449e","Type":"ContainerDied","Data":"4881203707bd0409008e4793f2afd7426265332e863c06e58e0426df80333f1a"} Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.135985 4688 scope.go:117] "RemoveContainer" containerID="8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.177803 4688 scope.go:117] "RemoveContainer" containerID="ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.214548 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vc4v4"] Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.221060 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vc4v4"] Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.250648 4688 scope.go:117] "RemoveContainer" containerID="8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a" Nov 25 12:31:16 crc kubenswrapper[4688]: E1125 12:31:16.251032 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a\": container with ID starting with 8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a not found: ID does not exist" containerID="8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.251060 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a"} err="failed to get container status \"8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a\": rpc error: code = NotFound desc = could not find container \"8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a\": container with ID starting with 8d66d09401deab9c7a78a2104c8ad62dfc843a24207754222e75e342d9caaf0a not found: ID does not exist" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.251080 4688 scope.go:117] "RemoveContainer" containerID="ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47" Nov 25 12:31:16 crc kubenswrapper[4688]: E1125 12:31:16.251269 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47\": container with ID starting with ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47 not found: ID does not exist" containerID="ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.251290 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47"} err="failed to get container status 
\"ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47\": rpc error: code = NotFound desc = could not find container \"ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47\": container with ID starting with ea3f51452654819a9456ca68a1f16cca3bff04d9661b8fdda2a8a1d2c50f9b47 not found: ID does not exist" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.514037 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kr2hz" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.629706 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs9r4\" (UniqueName: \"kubernetes.io/projected/97b3f267-42b4-46bc-a0e1-195315d0a782-kube-api-access-gs9r4\") pod \"97b3f267-42b4-46bc-a0e1-195315d0a782\" (UID: \"97b3f267-42b4-46bc-a0e1-195315d0a782\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.629780 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b3f267-42b4-46bc-a0e1-195315d0a782-operator-scripts\") pod \"97b3f267-42b4-46bc-a0e1-195315d0a782\" (UID: \"97b3f267-42b4-46bc-a0e1-195315d0a782\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.630862 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b3f267-42b4-46bc-a0e1-195315d0a782-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97b3f267-42b4-46bc-a0e1-195315d0a782" (UID: "97b3f267-42b4-46bc-a0e1-195315d0a782"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.635287 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b3f267-42b4-46bc-a0e1-195315d0a782-kube-api-access-gs9r4" (OuterVolumeSpecName: "kube-api-access-gs9r4") pod "97b3f267-42b4-46bc-a0e1-195315d0a782" (UID: "97b3f267-42b4-46bc-a0e1-195315d0a782"). InnerVolumeSpecName "kube-api-access-gs9r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.710549 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4d6a-account-create-9sw4q" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.719839 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-53c3-account-create-b6rlg" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.740550 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs9r4\" (UniqueName: \"kubernetes.io/projected/97b3f267-42b4-46bc-a0e1-195315d0a782-kube-api-access-gs9r4\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.740582 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b3f267-42b4-46bc-a0e1-195315d0a782-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.760610 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0e78db-622d-4795-99ee-7e2055d4449e" path="/var/lib/kubelet/pods/2c0e78db-622d-4795-99ee-7e2055d4449e/volumes" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.830415 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vglmq" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.843474 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797bab00-a9d6-4126-87d4-da7a46c3d318-operator-scripts\") pod \"797bab00-a9d6-4126-87d4-da7a46c3d318\" (UID: \"797bab00-a9d6-4126-87d4-da7a46c3d318\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.844235 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf69f\" (UniqueName: \"kubernetes.io/projected/b4495873-670c-487e-9989-b6c65bbd0c04-kube-api-access-tf69f\") pod \"b4495873-670c-487e-9989-b6c65bbd0c04\" (UID: \"b4495873-670c-487e-9989-b6c65bbd0c04\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.844011 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797bab00-a9d6-4126-87d4-da7a46c3d318-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "797bab00-a9d6-4126-87d4-da7a46c3d318" (UID: "797bab00-a9d6-4126-87d4-da7a46c3d318"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.844268 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nts4n\" (UniqueName: \"kubernetes.io/projected/797bab00-a9d6-4126-87d4-da7a46c3d318-kube-api-access-nts4n\") pod \"797bab00-a9d6-4126-87d4-da7a46c3d318\" (UID: \"797bab00-a9d6-4126-87d4-da7a46c3d318\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.844469 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4495873-670c-487e-9989-b6c65bbd0c04-operator-scripts\") pod \"b4495873-670c-487e-9989-b6c65bbd0c04\" (UID: \"b4495873-670c-487e-9989-b6c65bbd0c04\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.845296 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797bab00-a9d6-4126-87d4-da7a46c3d318-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.845954 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4495873-670c-487e-9989-b6c65bbd0c04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4495873-670c-487e-9989-b6c65bbd0c04" (UID: "b4495873-670c-487e-9989-b6c65bbd0c04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.846446 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-76cf-account-create-2z6wp" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.847540 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797bab00-a9d6-4126-87d4-da7a46c3d318-kube-api-access-nts4n" (OuterVolumeSpecName: "kube-api-access-nts4n") pod "797bab00-a9d6-4126-87d4-da7a46c3d318" (UID: "797bab00-a9d6-4126-87d4-da7a46c3d318"). InnerVolumeSpecName "kube-api-access-nts4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.848671 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4495873-670c-487e-9989-b6c65bbd0c04-kube-api-access-tf69f" (OuterVolumeSpecName: "kube-api-access-tf69f") pod "b4495873-670c-487e-9989-b6c65bbd0c04" (UID: "b4495873-670c-487e-9989-b6c65bbd0c04"). InnerVolumeSpecName "kube-api-access-tf69f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.895919 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fq9b" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.946496 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdrzz\" (UniqueName: \"kubernetes.io/projected/9087e75c-690f-4f83-b63a-af99c771cd3a-kube-api-access-bdrzz\") pod \"9087e75c-690f-4f83-b63a-af99c771cd3a\" (UID: \"9087e75c-690f-4f83-b63a-af99c771cd3a\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.946909 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9087e75c-690f-4f83-b63a-af99c771cd3a-operator-scripts\") pod \"9087e75c-690f-4f83-b63a-af99c771cd3a\" (UID: \"9087e75c-690f-4f83-b63a-af99c771cd3a\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.946945 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b487b4e-5b03-439f-80f9-53c3e37121c8-operator-scripts\") pod \"7b487b4e-5b03-439f-80f9-53c3e37121c8\" (UID: \"7b487b4e-5b03-439f-80f9-53c3e37121c8\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.947055 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr8ds\" (UniqueName: \"kubernetes.io/projected/7b487b4e-5b03-439f-80f9-53c3e37121c8-kube-api-access-nr8ds\") pod \"7b487b4e-5b03-439f-80f9-53c3e37121c8\" (UID: \"7b487b4e-5b03-439f-80f9-53c3e37121c8\") " Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.947408 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b487b4e-5b03-439f-80f9-53c3e37121c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b487b4e-5b03-439f-80f9-53c3e37121c8" (UID: "7b487b4e-5b03-439f-80f9-53c3e37121c8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.947507 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf69f\" (UniqueName: \"kubernetes.io/projected/b4495873-670c-487e-9989-b6c65bbd0c04-kube-api-access-tf69f\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.947549 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nts4n\" (UniqueName: \"kubernetes.io/projected/797bab00-a9d6-4126-87d4-da7a46c3d318-kube-api-access-nts4n\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.947563 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b487b4e-5b03-439f-80f9-53c3e37121c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.947574 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4495873-670c-487e-9989-b6c65bbd0c04-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.947770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9087e75c-690f-4f83-b63a-af99c771cd3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9087e75c-690f-4f83-b63a-af99c771cd3a" (UID: "9087e75c-690f-4f83-b63a-af99c771cd3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.950510 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9087e75c-690f-4f83-b63a-af99c771cd3a-kube-api-access-bdrzz" (OuterVolumeSpecName: "kube-api-access-bdrzz") pod "9087e75c-690f-4f83-b63a-af99c771cd3a" (UID: "9087e75c-690f-4f83-b63a-af99c771cd3a"). InnerVolumeSpecName "kube-api-access-bdrzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:16 crc kubenswrapper[4688]: I1125 12:31:16.950629 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b487b4e-5b03-439f-80f9-53c3e37121c8-kube-api-access-nr8ds" (OuterVolumeSpecName: "kube-api-access-nr8ds") pod "7b487b4e-5b03-439f-80f9-53c3e37121c8" (UID: "7b487b4e-5b03-439f-80f9-53c3e37121c8"). InnerVolumeSpecName "kube-api-access-nr8ds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.048711 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdl8s\" (UniqueName: \"kubernetes.io/projected/fa319425-2bf3-486e-b80c-52d377a48462-kube-api-access-kdl8s\") pod \"fa319425-2bf3-486e-b80c-52d377a48462\" (UID: \"fa319425-2bf3-486e-b80c-52d377a48462\") " Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.048797 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa319425-2bf3-486e-b80c-52d377a48462-operator-scripts\") pod \"fa319425-2bf3-486e-b80c-52d377a48462\" (UID: \"fa319425-2bf3-486e-b80c-52d377a48462\") " Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.049147 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9087e75c-690f-4f83-b63a-af99c771cd3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.049167 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr8ds\" (UniqueName: \"kubernetes.io/projected/7b487b4e-5b03-439f-80f9-53c3e37121c8-kube-api-access-nr8ds\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.049177 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdrzz\" (UniqueName: \"kubernetes.io/projected/9087e75c-690f-4f83-b63a-af99c771cd3a-kube-api-access-bdrzz\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.049513 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa319425-2bf3-486e-b80c-52d377a48462-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa319425-2bf3-486e-b80c-52d377a48462" (UID: "fa319425-2bf3-486e-b80c-52d377a48462"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.051921 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa319425-2bf3-486e-b80c-52d377a48462-kube-api-access-kdl8s" (OuterVolumeSpecName: "kube-api-access-kdl8s") pod "fa319425-2bf3-486e-b80c-52d377a48462" (UID: "fa319425-2bf3-486e-b80c-52d377a48462"). InnerVolumeSpecName "kube-api-access-kdl8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.143327 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vglmq" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.143320 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vglmq" event={"ID":"9087e75c-690f-4f83-b63a-af99c771cd3a","Type":"ContainerDied","Data":"bdc7ed82fe302754188b44000d81b1bbec90bd9aaeb693615ad03a0dc8349bbf"} Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.143479 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc7ed82fe302754188b44000d81b1bbec90bd9aaeb693615ad03a0dc8349bbf" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.144878 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kr2hz" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.144874 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kr2hz" event={"ID":"97b3f267-42b4-46bc-a0e1-195315d0a782","Type":"ContainerDied","Data":"e54b0d779a2101d5061ff6c1d561e998efcb3089b0ff6868daa00975127b48d8"} Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.145001 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e54b0d779a2101d5061ff6c1d561e998efcb3089b0ff6868daa00975127b48d8" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.146813 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fq9b" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.146817 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fq9b" event={"ID":"fa319425-2bf3-486e-b80c-52d377a48462","Type":"ContainerDied","Data":"f8acf3d534b3d0fb126a594d0714d6fa9b9a42740119acb555ee261ff75c1bce"} Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.147215 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8acf3d534b3d0fb126a594d0714d6fa9b9a42740119acb555ee261ff75c1bce" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.150485 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdl8s\" (UniqueName: \"kubernetes.io/projected/fa319425-2bf3-486e-b80c-52d377a48462-kube-api-access-kdl8s\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.150509 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa319425-2bf3-486e-b80c-52d377a48462-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.151114 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-53c3-account-create-b6rlg" event={"ID":"b4495873-670c-487e-9989-b6c65bbd0c04","Type":"ContainerDied","Data":"201bc9570ae30a51a6baf7aefa775e7e172e9db55de8613304aabea6e76eac81"} Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.151144 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201bc9570ae30a51a6baf7aefa775e7e172e9db55de8613304aabea6e76eac81" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.151123 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-53c3-account-create-b6rlg" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.152835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4d6a-account-create-9sw4q" event={"ID":"797bab00-a9d6-4126-87d4-da7a46c3d318","Type":"ContainerDied","Data":"570a72bf3ad1b1535a8e2bf93419c52d3a7e22b8a8ed57aabb9732a83e638eba"} Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.152862 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570a72bf3ad1b1535a8e2bf93419c52d3a7e22b8a8ed57aabb9732a83e638eba" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.152912 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4d6a-account-create-9sw4q" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.157415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76cf-account-create-2z6wp" event={"ID":"7b487b4e-5b03-439f-80f9-53c3e37121c8","Type":"ContainerDied","Data":"b3da0b5a1356d3f84cf3dd63a13062f985432c5e9236cf704e33ea95a0ec2d06"} Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.157441 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3da0b5a1356d3f84cf3dd63a13062f985432c5e9236cf704e33ea95a0ec2d06" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.157488 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-76cf-account-create-2z6wp" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.853666 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.855176 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.855417 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.856726 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd4c77f22f04f95d12c0a6e31890a8c2be94485d18b032708ee7f7a088bd619a"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:31:17 crc kubenswrapper[4688]: I1125 12:31:17.856998 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://bd4c77f22f04f95d12c0a6e31890a8c2be94485d18b032708ee7f7a088bd619a" gracePeriod=600 Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.167045 4688 generic.go:334] "Generic (PLEG): container finished" podID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerID="291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45" exitCode=0 Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.167178 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e","Type":"ContainerDied","Data":"291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45"} Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.273251 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rj4gb"] Nov 25 12:31:18 crc kubenswrapper[4688]: E1125 12:31:18.273662 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0e78db-622d-4795-99ee-7e2055d4449e" containerName="init" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 
12:31:18.273684 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0e78db-622d-4795-99ee-7e2055d4449e" containerName="init" Nov 25 12:31:18 crc kubenswrapper[4688]: E1125 12:31:18.273696 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0e78db-622d-4795-99ee-7e2055d4449e" containerName="dnsmasq-dns" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.273704 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0e78db-622d-4795-99ee-7e2055d4449e" containerName="dnsmasq-dns" Nov 25 12:31:18 crc kubenswrapper[4688]: E1125 12:31:18.273719 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa319425-2bf3-486e-b80c-52d377a48462" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.273727 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa319425-2bf3-486e-b80c-52d377a48462" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: E1125 12:31:18.273746 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4495873-670c-487e-9989-b6c65bbd0c04" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.273753 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4495873-670c-487e-9989-b6c65bbd0c04" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: E1125 12:31:18.273773 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797bab00-a9d6-4126-87d4-da7a46c3d318" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.273780 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="797bab00-a9d6-4126-87d4-da7a46c3d318" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: E1125 12:31:18.273799 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9087e75c-690f-4f83-b63a-af99c771cd3a" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.273807 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9087e75c-690f-4f83-b63a-af99c771cd3a" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: E1125 12:31:18.273818 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b487b4e-5b03-439f-80f9-53c3e37121c8" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.273826 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b487b4e-5b03-439f-80f9-53c3e37121c8" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: E1125 12:31:18.273838 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b3f267-42b4-46bc-a0e1-195315d0a782" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.273846 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b3f267-42b4-46bc-a0e1-195315d0a782" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.274019 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9087e75c-690f-4f83-b63a-af99c771cd3a" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.274032 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b3f267-42b4-46bc-a0e1-195315d0a782" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.274043 4688 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7b487b4e-5b03-439f-80f9-53c3e37121c8" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.274057 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="797bab00-a9d6-4126-87d4-da7a46c3d318" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.274073 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4495873-670c-487e-9989-b6c65bbd0c04" containerName="mariadb-account-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.274114 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0e78db-622d-4795-99ee-7e2055d4449e" containerName="dnsmasq-dns" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.274129 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa319425-2bf3-486e-b80c-52d377a48462" containerName="mariadb-database-create" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.275059 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.277991 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cc2v5" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.278038 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.282958 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rj4gb"] Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.378111 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpjz\" (UniqueName: \"kubernetes.io/projected/79d582d3-e8f8-49a4-a48c-9c07f6083db5-kube-api-access-7tpjz\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.378192 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-db-sync-config-data\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.378264 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-combined-ca-bundle\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.378319 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-config-data\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.479395 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tpjz\" (UniqueName: \"kubernetes.io/projected/79d582d3-e8f8-49a4-a48c-9c07f6083db5-kube-api-access-7tpjz\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " 
pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.479452 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-db-sync-config-data\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.479480 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-combined-ca-bundle\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.479511 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-config-data\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.489115 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-db-sync-config-data\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.489136 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-config-data\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.489329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-combined-ca-bundle\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.500091 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tpjz\" (UniqueName: \"kubernetes.io/projected/79d582d3-e8f8-49a4-a48c-9c07f6083db5-kube-api-access-7tpjz\") pod \"glance-db-sync-rj4gb\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:18 crc kubenswrapper[4688]: I1125 12:31:18.740298 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.175960 4688 generic.go:334] "Generic (PLEG): container finished" podID="dbef45ff-afce-462a-8835-30339db0f5a0" containerID="f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89" exitCode=0 Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.176026 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbef45ff-afce-462a-8835-30339db0f5a0","Type":"ContainerDied","Data":"f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89"} Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.178816 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e","Type":"ContainerStarted","Data":"2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2"} Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.179365 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.181995 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="bd4c77f22f04f95d12c0a6e31890a8c2be94485d18b032708ee7f7a088bd619a" exitCode=0 Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.182036 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"bd4c77f22f04f95d12c0a6e31890a8c2be94485d18b032708ee7f7a088bd619a"} Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.182312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"606e3c186faf0a77643eeee31f20e3a41380a34fa45ab23f8be805001dd713d2"} Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.182335 4688 scope.go:117] "RemoveContainer" containerID="f8fce2e1ba2b0b0a8ccd7a9e7c79c4f46ac3a4e41d62d29310173c9c94b065de" Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.235599 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.45184886 podStartE2EDuration="52.235579527s" podCreationTimestamp="2025-11-25 12:30:27 +0000 UTC" firstStartedPulling="2025-11-25 12:30:33.76971527 +0000 UTC m=+983.879344138" lastFinishedPulling="2025-11-25 12:30:44.553445937 +0000 UTC m=+994.663074805" observedRunningTime="2025-11-25 12:31:19.234770575 +0000 UTC m=+1029.344399463" watchObservedRunningTime="2025-11-25 12:31:19.235579527 +0000 UTC m=+1029.345208395" Nov 25 12:31:19 crc kubenswrapper[4688]: I1125 12:31:19.321786 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rj4gb"] Nov 25 12:31:20 crc kubenswrapper[4688]: I1125 12:31:20.192157 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbef45ff-afce-462a-8835-30339db0f5a0","Type":"ContainerStarted","Data":"d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d"} Nov 25 12:31:20 crc kubenswrapper[4688]: I1125 12:31:20.192593 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:31:20 crc kubenswrapper[4688]: I1125 12:31:20.193810 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-db-sync-rj4gb" event={"ID":"79d582d3-e8f8-49a4-a48c-9c07f6083db5","Type":"ContainerStarted","Data":"82b765e94634ea582a1bd3801742ddeb201aa86e5f7ee5b56a288fe6b17a87e9"} Nov 25 12:31:20 crc kubenswrapper[4688]: I1125 12:31:20.220134 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.220118645 podStartE2EDuration="52.220118645s" podCreationTimestamp="2025-11-25 12:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:31:20.215349028 +0000 UTC m=+1030.324977896" watchObservedRunningTime="2025-11-25 12:31:20.220118645 +0000 UTC m=+1030.329747513" Nov 25 12:31:21 crc kubenswrapper[4688]: I1125 12:31:21.207620 4688 generic.go:334] "Generic (PLEG): container finished" podID="a917ea03-c867-4449-a317-2ed904672efa" containerID="8902f9799b89feb510b78c06f839f9c86ceb651d88b6fa81a678488ace7a8732" exitCode=0 Nov 25 12:31:21 crc kubenswrapper[4688]: I1125 12:31:21.207696 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-brb2n" event={"ID":"a917ea03-c867-4449-a317-2ed904672efa","Type":"ContainerDied","Data":"8902f9799b89feb510b78c06f839f9c86ceb651d88b6fa81a678488ace7a8732"} Nov 25 12:31:21 crc kubenswrapper[4688]: I1125 12:31:21.737485 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:21 crc kubenswrapper[4688]: I1125 12:31:21.747684 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/000479f0-0b04-4867-989b-622c2e951f4b-etc-swift\") pod \"swift-storage-0\" (UID: \"000479f0-0b04-4867-989b-622c2e951f4b\") " pod="openstack/swift-storage-0" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.003641 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.549342 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-brb2n" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.645439 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.652498 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-swiftconf\") pod \"a917ea03-c867-4449-a317-2ed904672efa\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.652569 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqrcw\" (UniqueName: \"kubernetes.io/projected/a917ea03-c867-4449-a317-2ed904672efa-kube-api-access-zqrcw\") pod \"a917ea03-c867-4449-a317-2ed904672efa\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.652614 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-combined-ca-bundle\") pod \"a917ea03-c867-4449-a317-2ed904672efa\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.652659 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a917ea03-c867-4449-a317-2ed904672efa-etc-swift\") pod \"a917ea03-c867-4449-a317-2ed904672efa\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.652732 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-scripts\") pod \"a917ea03-c867-4449-a317-2ed904672efa\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.652754 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-ring-data-devices\") pod \"a917ea03-c867-4449-a317-2ed904672efa\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.652778 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-dispersionconf\") pod \"a917ea03-c867-4449-a317-2ed904672efa\" (UID: \"a917ea03-c867-4449-a317-2ed904672efa\") " Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.655638 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a917ea03-c867-4449-a317-2ed904672efa" (UID: "a917ea03-c867-4449-a317-2ed904672efa"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.658477 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a917ea03-c867-4449-a317-2ed904672efa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a917ea03-c867-4449-a317-2ed904672efa" (UID: "a917ea03-c867-4449-a317-2ed904672efa"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.661695 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a917ea03-c867-4449-a317-2ed904672efa-kube-api-access-zqrcw" (OuterVolumeSpecName: "kube-api-access-zqrcw") pod "a917ea03-c867-4449-a317-2ed904672efa" (UID: "a917ea03-c867-4449-a317-2ed904672efa"). InnerVolumeSpecName "kube-api-access-zqrcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.665474 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a917ea03-c867-4449-a317-2ed904672efa" (UID: "a917ea03-c867-4449-a317-2ed904672efa"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.683419 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-scripts" (OuterVolumeSpecName: "scripts") pod "a917ea03-c867-4449-a317-2ed904672efa" (UID: "a917ea03-c867-4449-a317-2ed904672efa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.684654 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a917ea03-c867-4449-a317-2ed904672efa" (UID: "a917ea03-c867-4449-a317-2ed904672efa"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.689427 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a917ea03-c867-4449-a317-2ed904672efa" (UID: "a917ea03-c867-4449-a317-2ed904672efa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.734333 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.754404 4688 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.754446 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqrcw\" (UniqueName: \"kubernetes.io/projected/a917ea03-c867-4449-a317-2ed904672efa-kube-api-access-zqrcw\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.754461 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.754472 4688 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a917ea03-c867-4449-a317-2ed904672efa-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.754484 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.754494 4688 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a917ea03-c867-4449-a317-2ed904672efa-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:22 crc kubenswrapper[4688]: I1125 12:31:22.754504 4688 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a917ea03-c867-4449-a317-2ed904672efa-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:23 crc kubenswrapper[4688]: I1125 12:31:23.228897 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-brb2n" event={"ID":"a917ea03-c867-4449-a317-2ed904672efa","Type":"ContainerDied","Data":"d44ddf30159416ac9b985572d19d3ff1e2615e9fe7eee3a531f60bdc68f668d5"} Nov 25 12:31:23 crc kubenswrapper[4688]: I1125 12:31:23.229272 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44ddf30159416ac9b985572d19d3ff1e2615e9fe7eee3a531f60bdc68f668d5" Nov 25 12:31:23 crc kubenswrapper[4688]: I1125 12:31:23.228929 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-brb2n" Nov 25 12:31:23 crc kubenswrapper[4688]: I1125 12:31:23.230799 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"69be3a43f1a331af606c2bd404fe1a664b3bbe1f100e5ea2f9033fb87e4b6b04"} Nov 25 12:31:24 crc kubenswrapper[4688]: I1125 12:31:24.240681 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"4d8fcf747d1fe6faf33e75fd854399ea7b54925d17f6c94942e55b3d47cc4933"} Nov 25 12:31:24 crc kubenswrapper[4688]: I1125 12:31:24.240973 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"01af11430298a06ae6140e3d0568426ec2c4727b5738a29657396ac5d513bb46"} Nov 25 12:31:27 crc kubenswrapper[4688]: I1125 12:31:27.994418 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-92hwc" podUID="4ac126ff-ac63-4d6a-b201-e6dbd8ba3153" containerName="ovn-controller" probeResult="failure" output=< Nov 25 12:31:27 crc kubenswrapper[4688]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 12:31:27 crc kubenswrapper[4688]: > Nov 25 12:31:28 crc kubenswrapper[4688]: I1125 12:31:28.019403 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.233252 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.503760 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.603092 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-dbv7f"] Nov 25 12:31:29 crc kubenswrapper[4688]: E1125 12:31:29.604124 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a917ea03-c867-4449-a317-2ed904672efa" containerName="swift-ring-rebalance" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.605243 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a917ea03-c867-4449-a317-2ed904672efa" containerName="swift-ring-rebalance" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.605688 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a917ea03-c867-4449-a317-2ed904672efa" containerName="swift-ring-rebalance" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.606839 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.619189 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-dbv7f"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.684746 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d85141d-b092-4062-a640-0192faa87846-operator-scripts\") pod \"heat-db-create-dbv7f\" (UID: \"0d85141d-b092-4062-a640-0192faa87846\") " pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.685234 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhbt9\" (UniqueName: \"kubernetes.io/projected/0d85141d-b092-4062-a640-0192faa87846-kube-api-access-hhbt9\") pod \"heat-db-create-dbv7f\" (UID: \"0d85141d-b092-4062-a640-0192faa87846\") " pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.685863 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-j2kst"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.686846 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.700584 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j2kst"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.787373 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01cff46f-6371-4d95-b13d-e9b7a337c230-operator-scripts\") pod \"cinder-db-create-j2kst\" (UID: \"01cff46f-6371-4d95-b13d-e9b7a337c230\") " pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.787473 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flt2v\" (UniqueName: \"kubernetes.io/projected/01cff46f-6371-4d95-b13d-e9b7a337c230-kube-api-access-flt2v\") pod \"cinder-db-create-j2kst\" (UID: \"01cff46f-6371-4d95-b13d-e9b7a337c230\") " pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.787503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d85141d-b092-4062-a640-0192faa87846-operator-scripts\") pod \"heat-db-create-dbv7f\" (UID: \"0d85141d-b092-4062-a640-0192faa87846\") " pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.787551 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhbt9\" (UniqueName: \"kubernetes.io/projected/0d85141d-b092-4062-a640-0192faa87846-kube-api-access-hhbt9\") pod \"heat-db-create-dbv7f\" (UID: \"0d85141d-b092-4062-a640-0192faa87846\") " pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.788324 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d85141d-b092-4062-a640-0192faa87846-operator-scripts\") pod \"heat-db-create-dbv7f\" (UID: \"0d85141d-b092-4062-a640-0192faa87846\") " pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.813371 4688 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/barbican-7cdc-account-create-rwhtz"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.826714 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.830357 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.847025 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhbt9\" (UniqueName: \"kubernetes.io/projected/0d85141d-b092-4062-a640-0192faa87846-kube-api-access-hhbt9\") pod \"heat-db-create-dbv7f\" (UID: \"0d85141d-b092-4062-a640-0192faa87846\") " pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.869982 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7cdc-account-create-rwhtz"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.894984 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-operator-scripts\") pod \"barbican-7cdc-account-create-rwhtz\" (UID: \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\") " pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.895426 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01cff46f-6371-4d95-b13d-e9b7a337c230-operator-scripts\") pod \"cinder-db-create-j2kst\" (UID: \"01cff46f-6371-4d95-b13d-e9b7a337c230\") " pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.895711 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flt2v\" (UniqueName: \"kubernetes.io/projected/01cff46f-6371-4d95-b13d-e9b7a337c230-kube-api-access-flt2v\") pod \"cinder-db-create-j2kst\" (UID: \"01cff46f-6371-4d95-b13d-e9b7a337c230\") " pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.895959 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9tc\" (UniqueName: \"kubernetes.io/projected/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-kube-api-access-fg9tc\") pod \"barbican-7cdc-account-create-rwhtz\" (UID: \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\") " pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.897332 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01cff46f-6371-4d95-b13d-e9b7a337c230-operator-scripts\") pod \"cinder-db-create-j2kst\" (UID: \"01cff46f-6371-4d95-b13d-e9b7a337c230\") " pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.906276 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bz5zq"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.917431 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.936970 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bz5zq"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.945620 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.946747 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-ee49-account-create-z8krp"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.947762 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flt2v\" (UniqueName: \"kubernetes.io/projected/01cff46f-6371-4d95-b13d-e9b7a337c230-kube-api-access-flt2v\") pod \"cinder-db-create-j2kst\" (UID: \"01cff46f-6371-4d95-b13d-e9b7a337c230\") " pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.963066 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-ee49-account-create-z8krp"] Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.963188 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.965995 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.997743 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eb3a057-3576-4c65-a327-c3325780d24a-operator-scripts\") pod \"barbican-db-create-bz5zq\" (UID: \"9eb3a057-3576-4c65-a327-c3325780d24a\") " pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.997812 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg9tc\" (UniqueName: \"kubernetes.io/projected/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-kube-api-access-fg9tc\") pod \"barbican-7cdc-account-create-rwhtz\" (UID: \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\") " pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.997902 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-operator-scripts\") pod \"barbican-7cdc-account-create-rwhtz\" (UID: \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\") " pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.997943 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl8cs\" (UniqueName: \"kubernetes.io/projected/9eb3a057-3576-4c65-a327-c3325780d24a-kube-api-access-xl8cs\") pod \"barbican-db-create-bz5zq\" (UID: \"9eb3a057-3576-4c65-a327-c3325780d24a\") " pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:29 crc kubenswrapper[4688]: I1125 12:31:29.999749 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-operator-scripts\") pod \"barbican-7cdc-account-create-rwhtz\" (UID: \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\") " pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 
12:31:30.007252 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.011080 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rqdck"] Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.012192 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.016046 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.016236 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f7h8n" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.016365 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.016245 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.024731 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rqdck"] Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.030292 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg9tc\" (UniqueName: \"kubernetes.io/projected/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-kube-api-access-fg9tc\") pod \"barbican-7cdc-account-create-rwhtz\" (UID: \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\") " pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.097793 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rn78b"] Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.098794 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.099350 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eb3a057-3576-4c65-a327-c3325780d24a-operator-scripts\") pod \"barbican-db-create-bz5zq\" (UID: \"9eb3a057-3576-4c65-a327-c3325780d24a\") " pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.099458 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-config-data\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.099565 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2rw\" (UniqueName: \"kubernetes.io/projected/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-kube-api-access-4k2rw\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.099635 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fpv7\" (UniqueName: \"kubernetes.io/projected/f060668a-f36b-47f8-88eb-f7ecc06a491c-kube-api-access-8fpv7\") pod \"heat-ee49-account-create-z8krp\" (UID: \"f060668a-f36b-47f8-88eb-f7ecc06a491c\") " pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.099729 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f060668a-f36b-47f8-88eb-f7ecc06a491c-operator-scripts\") pod \"heat-ee49-account-create-z8krp\" (UID: \"f060668a-f36b-47f8-88eb-f7ecc06a491c\") " pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.099811 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl8cs\" (UniqueName: \"kubernetes.io/projected/9eb3a057-3576-4c65-a327-c3325780d24a-kube-api-access-xl8cs\") pod \"barbican-db-create-bz5zq\" (UID: \"9eb3a057-3576-4c65-a327-c3325780d24a\") " pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.099832 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-combined-ca-bundle\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.100359 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eb3a057-3576-4c65-a327-c3325780d24a-operator-scripts\") pod \"barbican-db-create-bz5zq\" (UID: \"9eb3a057-3576-4c65-a327-c3325780d24a\") " pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.115229 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cab6-account-create-v69wv"] Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.116503 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.119184 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.121424 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl8cs\" (UniqueName: \"kubernetes.io/projected/9eb3a057-3576-4c65-a327-c3325780d24a-kube-api-access-xl8cs\") pod \"barbican-db-create-bz5zq\" (UID: \"9eb3a057-3576-4c65-a327-c3325780d24a\") " pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.131596 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rn78b"] Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.151911 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cab6-account-create-v69wv"] Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201246 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495ae3d-a5dd-4c69-86d8-6081174afdc0-operator-scripts\") pod \"neutron-db-create-rn78b\" (UID: \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\") " pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201325 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-config-data\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201360 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k2rw\" (UniqueName: \"kubernetes.io/projected/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-kube-api-access-4k2rw\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201397 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fpv7\" (UniqueName: \"kubernetes.io/projected/f060668a-f36b-47f8-88eb-f7ecc06a491c-kube-api-access-8fpv7\") pod \"heat-ee49-account-create-z8krp\" (UID: \"f060668a-f36b-47f8-88eb-f7ecc06a491c\") " pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201418 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6955\" (UniqueName: \"kubernetes.io/projected/2495ae3d-a5dd-4c69-86d8-6081174afdc0-kube-api-access-h6955\") pod \"neutron-db-create-rn78b\" (UID: \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\") " pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201459 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38e9927b-64ba-43f5-a5e7-2061e1288e8d-operator-scripts\") pod \"cinder-cab6-account-create-v69wv\" (UID: \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\") " pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201478 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qc6\" (UniqueName: 
\"kubernetes.io/projected/38e9927b-64ba-43f5-a5e7-2061e1288e8d-kube-api-access-44qc6\") pod \"cinder-cab6-account-create-v69wv\" (UID: \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\") " pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201513 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f060668a-f36b-47f8-88eb-f7ecc06a491c-operator-scripts\") pod \"heat-ee49-account-create-z8krp\" (UID: \"f060668a-f36b-47f8-88eb-f7ecc06a491c\") " pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.201546 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-combined-ca-bundle\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.202455 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f060668a-f36b-47f8-88eb-f7ecc06a491c-operator-scripts\") pod \"heat-ee49-account-create-z8krp\" (UID: \"f060668a-f36b-47f8-88eb-f7ecc06a491c\") " pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.206192 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-combined-ca-bundle\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.206278 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-config-data\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.223759 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fpv7\" (UniqueName: \"kubernetes.io/projected/f060668a-f36b-47f8-88eb-f7ecc06a491c-kube-api-access-8fpv7\") pod \"heat-ee49-account-create-z8krp\" (UID: \"f060668a-f36b-47f8-88eb-f7ecc06a491c\") " pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.226694 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k2rw\" (UniqueName: \"kubernetes.io/projected/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-kube-api-access-4k2rw\") pod \"keystone-db-sync-rqdck\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.228148 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.302367 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.302959 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495ae3d-a5dd-4c69-86d8-6081174afdc0-operator-scripts\") pod \"neutron-db-create-rn78b\" (UID: \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\") " pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.303062 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6955\" (UniqueName: \"kubernetes.io/projected/2495ae3d-a5dd-4c69-86d8-6081174afdc0-kube-api-access-h6955\") pod \"neutron-db-create-rn78b\" (UID: \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\") " pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.303102 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38e9927b-64ba-43f5-a5e7-2061e1288e8d-operator-scripts\") pod \"cinder-cab6-account-create-v69wv\" (UID: \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\") " pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.303121 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44qc6\" (UniqueName: \"kubernetes.io/projected/38e9927b-64ba-43f5-a5e7-2061e1288e8d-kube-api-access-44qc6\") pod \"cinder-cab6-account-create-v69wv\" (UID: \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\") " pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.304226 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38e9927b-64ba-43f5-a5e7-2061e1288e8d-operator-scripts\") pod \"cinder-cab6-account-create-v69wv\" (UID: \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\") " pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.304264 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495ae3d-a5dd-4c69-86d8-6081174afdc0-operator-scripts\") pod \"neutron-db-create-rn78b\" (UID: \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\") " pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.316025 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.320193 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44qc6\" (UniqueName: \"kubernetes.io/projected/38e9927b-64ba-43f5-a5e7-2061e1288e8d-kube-api-access-44qc6\") pod \"cinder-cab6-account-create-v69wv\" (UID: \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\") " pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.321130 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6955\" (UniqueName: \"kubernetes.io/projected/2495ae3d-a5dd-4c69-86d8-6081174afdc0-kube-api-access-h6955\") pod \"neutron-db-create-rn78b\" (UID: \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\") " pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.371535 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.393881 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4668-account-create-vvbfr"] Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.395079 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.397462 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.406118 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4668-account-create-vvbfr"] Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.416485 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.476978 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.506924 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgs6\" (UniqueName: \"kubernetes.io/projected/0f5088e7-042f-48f0-97d3-d97e820ac314-kube-api-access-jsgs6\") pod \"neutron-4668-account-create-vvbfr\" (UID: \"0f5088e7-042f-48f0-97d3-d97e820ac314\") " pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.506980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f5088e7-042f-48f0-97d3-d97e820ac314-operator-scripts\") pod \"neutron-4668-account-create-vvbfr\" (UID: \"0f5088e7-042f-48f0-97d3-d97e820ac314\") " pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.609290 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsgs6\" (UniqueName: \"kubernetes.io/projected/0f5088e7-042f-48f0-97d3-d97e820ac314-kube-api-access-jsgs6\") pod \"neutron-4668-account-create-vvbfr\" (UID: \"0f5088e7-042f-48f0-97d3-d97e820ac314\") " pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.609339 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f5088e7-042f-48f0-97d3-d97e820ac314-operator-scripts\") pod \"neutron-4668-account-create-vvbfr\" (UID: \"0f5088e7-042f-48f0-97d3-d97e820ac314\") " pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.610183 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f5088e7-042f-48f0-97d3-d97e820ac314-operator-scripts\") pod \"neutron-4668-account-create-vvbfr\" (UID: \"0f5088e7-042f-48f0-97d3-d97e820ac314\") " pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.627863 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsgs6\" (UniqueName: \"kubernetes.io/projected/0f5088e7-042f-48f0-97d3-d97e820ac314-kube-api-access-jsgs6\") pod \"neutron-4668-account-create-vvbfr\" (UID: \"0f5088e7-042f-48f0-97d3-d97e820ac314\") " 
pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:30 crc kubenswrapper[4688]: I1125 12:31:30.713811 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:32 crc kubenswrapper[4688]: I1125 12:31:32.982102 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-92hwc" podUID="4ac126ff-ac63-4d6a-b201-e6dbd8ba3153" containerName="ovn-controller" probeResult="failure" output=< Nov 25 12:31:32 crc kubenswrapper[4688]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 12:31:32 crc kubenswrapper[4688]: > Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.147846 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-nndbf" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.383841 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-92hwc-config-thmq5"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.385297 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.388169 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.417212 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-92hwc-config-thmq5"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.459296 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.459369 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-additional-scripts\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.459455 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run-ovn\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.459489 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q46m7\" (UniqueName: \"kubernetes.io/projected/b5ee6b67-09c3-4858-b121-c579ab266af7-kube-api-access-q46m7\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.459512 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-scripts\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: 
\"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.459560 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-log-ovn\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.562622 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-additional-scripts\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563075 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563085 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run-ovn\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563173 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run-ovn\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563191 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46m7\" (UniqueName: \"kubernetes.io/projected/b5ee6b67-09c3-4858-b121-c579ab266af7-kube-api-access-q46m7\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563218 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-scripts\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563259 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-log-ovn\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: 
\"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563393 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-log-ovn\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.563873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-additional-scripts\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.565403 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-scripts\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.568910 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-ee49-account-create-z8krp"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.581291 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4668-account-create-vvbfr"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.590799 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46m7\" (UniqueName: \"kubernetes.io/projected/b5ee6b67-09c3-4858-b121-c579ab266af7-kube-api-access-q46m7\") pod \"ovn-controller-92hwc-config-thmq5\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: W1125 12:31:33.621239 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f5088e7_042f_48f0_97d3_d97e820ac314.slice/crio-cc96c6a81ee3658112aebbc6e0b870b39fcf612f91bdc4de0c3b7ac37a0e17e8 WatchSource:0}: Error finding container cc96c6a81ee3658112aebbc6e0b870b39fcf612f91bdc4de0c3b7ac37a0e17e8: Status 404 returned error can't find the container with id cc96c6a81ee3658112aebbc6e0b870b39fcf612f91bdc4de0c3b7ac37a0e17e8 Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.686658 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rn78b"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.706121 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cab6-account-create-v69wv"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.742480 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.798681 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bz5zq"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.826428 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-dbv7f"] Nov 25 12:31:33 crc kubenswrapper[4688]: W1125 12:31:33.833042 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb3a057_3576_4c65_a327_c3325780d24a.slice/crio-ff48dd93d4d28a8ee5f6ef0269008a833f9b8f8dc1f7f070e55ce972d00b911f WatchSource:0}: Error finding container ff48dd93d4d28a8ee5f6ef0269008a833f9b8f8dc1f7f070e55ce972d00b911f: Status 404 returned error can't find the container with id ff48dd93d4d28a8ee5f6ef0269008a833f9b8f8dc1f7f070e55ce972d00b911f Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.835733 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j2kst"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.845573 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7cdc-account-create-rwhtz"] Nov 25 12:31:33 crc kubenswrapper[4688]: I1125 12:31:33.857029 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rqdck"] Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.319838 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-92hwc-config-thmq5"] Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.347086 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rn78b" event={"ID":"2495ae3d-a5dd-4c69-86d8-6081174afdc0","Type":"ContainerStarted","Data":"1f3c6cc0ac036d43dba0941ba72b325033b797a7921eb00abe66accfa7eebac8"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.368352 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j2kst" event={"ID":"01cff46f-6371-4d95-b13d-e9b7a337c230","Type":"ContainerStarted","Data":"2346c50e8bf7732ba8667658464a5daa2ded48fdb67046d7b4eba77f4517ed82"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.423804 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dbv7f" event={"ID":"0d85141d-b092-4062-a640-0192faa87846","Type":"ContainerStarted","Data":"66313cc67818af14741e47c379b6ca3efa8ae3459fbe8a765f6814121b511cd8"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.446127 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"f28d947a86b299a060e9bcccc76702ee926345d95d23af80b965fb7b5077fcf1"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.447450 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bz5zq" event={"ID":"9eb3a057-3576-4c65-a327-c3325780d24a","Type":"ContainerStarted","Data":"ff48dd93d4d28a8ee5f6ef0269008a833f9b8f8dc1f7f070e55ce972d00b911f"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.451463 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rqdck" event={"ID":"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0","Type":"ContainerStarted","Data":"b3fc0cbfc5c463b3912540d17616396c5c2e1a18d594ac237ad137bcb9c4b813"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.457207 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-cab6-account-create-v69wv" event={"ID":"38e9927b-64ba-43f5-a5e7-2061e1288e8d","Type":"ContainerStarted","Data":"1deaf06e1c691992166c9d06a393c94c75856cfd1af9fee80fc8b4b87bf101ff"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.458330 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7cdc-account-create-rwhtz" event={"ID":"3a0ba682-5263-45cb-afc5-ca4d3f4b1354","Type":"ContainerStarted","Data":"9b34ba575c3cd423da727a4ffb79e7aab7442be1147f9128162fad0b7bb06220"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.459554 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4668-account-create-vvbfr" event={"ID":"0f5088e7-042f-48f0-97d3-d97e820ac314","Type":"ContainerStarted","Data":"847eaa7616d949709a425dc2e5d264b7251af5c0fe2422123a7947060f74883e"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.459576 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4668-account-create-vvbfr" event={"ID":"0f5088e7-042f-48f0-97d3-d97e820ac314","Type":"ContainerStarted","Data":"cc96c6a81ee3658112aebbc6e0b870b39fcf612f91bdc4de0c3b7ac37a0e17e8"} Nov 25 12:31:34 crc kubenswrapper[4688]: I1125 12:31:34.460715 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ee49-account-create-z8krp" event={"ID":"f060668a-f36b-47f8-88eb-f7ecc06a491c","Type":"ContainerStarted","Data":"2a7670d00675ff3a9f9a858411de7f5215404a9d552614545e1da777bc05d308"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.472355 4688 generic.go:334] "Generic (PLEG): container finished" podID="01cff46f-6371-4d95-b13d-e9b7a337c230" containerID="dddc26a9673cd0b63c6b252ccbcb05906ac2da7548c27fa4ca71a94e9cfd2466" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.472409 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j2kst" event={"ID":"01cff46f-6371-4d95-b13d-e9b7a337c230","Type":"ContainerDied","Data":"dddc26a9673cd0b63c6b252ccbcb05906ac2da7548c27fa4ca71a94e9cfd2466"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.474318 4688 generic.go:334] "Generic (PLEG): container finished" podID="38e9927b-64ba-43f5-a5e7-2061e1288e8d" containerID="f5ed01d2a641e9b680adf3b55c7b5fd30ec0001d963087225306d8b4bd512dd6" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.474403 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cab6-account-create-v69wv" event={"ID":"38e9927b-64ba-43f5-a5e7-2061e1288e8d","Type":"ContainerDied","Data":"f5ed01d2a641e9b680adf3b55c7b5fd30ec0001d963087225306d8b4bd512dd6"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.476798 4688 generic.go:334] "Generic (PLEG): container finished" podID="0d85141d-b092-4062-a640-0192faa87846" containerID="3356c367703815ea150351dc0608ba763e9f09b02eab2e355ff043cf011ecaca" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.476855 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dbv7f" event={"ID":"0d85141d-b092-4062-a640-0192faa87846","Type":"ContainerDied","Data":"3356c367703815ea150351dc0608ba763e9f09b02eab2e355ff043cf011ecaca"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.478592 4688 generic.go:334] "Generic (PLEG): container finished" podID="b5ee6b67-09c3-4858-b121-c579ab266af7" containerID="f91547d0c5aff630284efc79b587b50b2309c1363b1038b4659296ff93dc954d" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.478649 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-92hwc-config-thmq5" event={"ID":"b5ee6b67-09c3-4858-b121-c579ab266af7","Type":"ContainerDied","Data":"f91547d0c5aff630284efc79b587b50b2309c1363b1038b4659296ff93dc954d"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.478691 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-92hwc-config-thmq5" event={"ID":"b5ee6b67-09c3-4858-b121-c579ab266af7","Type":"ContainerStarted","Data":"f972433657ac37da01fff9520ade2e2d42027529538d7a0b3fcd9697a8a8d1ef"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.483215 4688 generic.go:334] "Generic (PLEG): container finished" podID="0f5088e7-042f-48f0-97d3-d97e820ac314" containerID="847eaa7616d949709a425dc2e5d264b7251af5c0fe2422123a7947060f74883e" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.483380 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4668-account-create-vvbfr" event={"ID":"0f5088e7-042f-48f0-97d3-d97e820ac314","Type":"ContainerDied","Data":"847eaa7616d949709a425dc2e5d264b7251af5c0fe2422123a7947060f74883e"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.486992 4688 generic.go:334] "Generic (PLEG): container finished" podID="2495ae3d-a5dd-4c69-86d8-6081174afdc0" containerID="e7fc7d93e345f4c1714fed482ff9eadcafd4d0b0c2bef4511c1c69c973d7ff1c" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.487067 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rn78b" event={"ID":"2495ae3d-a5dd-4c69-86d8-6081174afdc0","Type":"ContainerDied","Data":"e7fc7d93e345f4c1714fed482ff9eadcafd4d0b0c2bef4511c1c69c973d7ff1c"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.491783 4688 generic.go:334] "Generic (PLEG): container finished" podID="3a0ba682-5263-45cb-afc5-ca4d3f4b1354" containerID="b12644e475efc6dd8f036474e1045822faed67f268bf85acd69cfff1d5cdf6ef" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.491888 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7cdc-account-create-rwhtz" event={"ID":"3a0ba682-5263-45cb-afc5-ca4d3f4b1354","Type":"ContainerDied","Data":"b12644e475efc6dd8f036474e1045822faed67f268bf85acd69cfff1d5cdf6ef"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.494825 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"5f35c134b75b1670343f4570796ce576e577b72f9aa9fd31bacb939af4714952"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.496153 4688 generic.go:334] "Generic (PLEG): container finished" podID="f060668a-f36b-47f8-88eb-f7ecc06a491c" containerID="369380adb515d109980cacf6efc40c3c21d81bc8261b23ee644c8001f845b7b2" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.496199 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ee49-account-create-z8krp" event={"ID":"f060668a-f36b-47f8-88eb-f7ecc06a491c","Type":"ContainerDied","Data":"369380adb515d109980cacf6efc40c3c21d81bc8261b23ee644c8001f845b7b2"} Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.498188 4688 generic.go:334] "Generic (PLEG): container finished" podID="9eb3a057-3576-4c65-a327-c3325780d24a" containerID="92f2f7d6b9f2d384530e30587ac26eb7f433bed79b9a1dcc7d3810fdf2c4d1eb" exitCode=0 Nov 25 12:31:35 crc kubenswrapper[4688]: I1125 12:31:35.498217 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bz5zq" 
event={"ID":"9eb3a057-3576-4c65-a327-c3325780d24a","Type":"ContainerDied","Data":"92f2f7d6b9f2d384530e30587ac26eb7f433bed79b9a1dcc7d3810fdf2c4d1eb"} Nov 25 12:31:36 crc kubenswrapper[4688]: I1125 12:31:36.511368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"bd70291e472336b8547933d7b2485e1c9f73c50298309795580d5992a9088290"} Nov 25 12:31:36 crc kubenswrapper[4688]: I1125 12:31:36.511764 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"8b29042bbbf5adfb2fc150c708723b91ae9383e97f4826132c29831afe9d99fe"} Nov 25 12:31:36 crc kubenswrapper[4688]: I1125 12:31:36.515729 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rj4gb" event={"ID":"79d582d3-e8f8-49a4-a48c-9c07f6083db5","Type":"ContainerStarted","Data":"7e3445c248401cd21f1a77c6bc99c1f59a5e099500e4cfde7d123456e9ca859e"} Nov 25 12:31:37 crc kubenswrapper[4688]: I1125 12:31:37.976991 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-92hwc" Nov 25 12:31:38 crc kubenswrapper[4688]: I1125 12:31:38.002820 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rj4gb" podStartSLOduration=5.532577324 podStartE2EDuration="20.002799363s" podCreationTimestamp="2025-11-25 12:31:18 +0000 UTC" firstStartedPulling="2025-11-25 12:31:19.356227418 +0000 UTC m=+1029.465856276" lastFinishedPulling="2025-11-25 12:31:33.826449447 +0000 UTC m=+1043.936078315" observedRunningTime="2025-11-25 12:31:36.535629579 +0000 UTC m=+1046.645258467" watchObservedRunningTime="2025-11-25 12:31:38.002799363 +0000 UTC m=+1048.112428231" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.548312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rn78b" event={"ID":"2495ae3d-a5dd-4c69-86d8-6081174afdc0","Type":"ContainerDied","Data":"1f3c6cc0ac036d43dba0941ba72b325033b797a7921eb00abe66accfa7eebac8"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.548790 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3c6cc0ac036d43dba0941ba72b325033b797a7921eb00abe66accfa7eebac8" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.553207 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j2kst" event={"ID":"01cff46f-6371-4d95-b13d-e9b7a337c230","Type":"ContainerDied","Data":"2346c50e8bf7732ba8667658464a5daa2ded48fdb67046d7b4eba77f4517ed82"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.553489 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2346c50e8bf7732ba8667658464a5daa2ded48fdb67046d7b4eba77f4517ed82" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.555749 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cab6-account-create-v69wv" event={"ID":"38e9927b-64ba-43f5-a5e7-2061e1288e8d","Type":"ContainerDied","Data":"1deaf06e1c691992166c9d06a393c94c75856cfd1af9fee80fc8b4b87bf101ff"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.556008 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1deaf06e1c691992166c9d06a393c94c75856cfd1af9fee80fc8b4b87bf101ff" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.557849 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-4668-account-create-vvbfr" event={"ID":"0f5088e7-042f-48f0-97d3-d97e820ac314","Type":"ContainerDied","Data":"cc96c6a81ee3658112aebbc6e0b870b39fcf612f91bdc4de0c3b7ac37a0e17e8"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.558056 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc96c6a81ee3658112aebbc6e0b870b39fcf612f91bdc4de0c3b7ac37a0e17e8" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.564243 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ee49-account-create-z8krp" event={"ID":"f060668a-f36b-47f8-88eb-f7ecc06a491c","Type":"ContainerDied","Data":"2a7670d00675ff3a9f9a858411de7f5215404a9d552614545e1da777bc05d308"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.564320 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7670d00675ff3a9f9a858411de7f5215404a9d552614545e1da777bc05d308" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.567012 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bz5zq" event={"ID":"9eb3a057-3576-4c65-a327-c3325780d24a","Type":"ContainerDied","Data":"ff48dd93d4d28a8ee5f6ef0269008a833f9b8f8dc1f7f070e55ce972d00b911f"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.567268 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff48dd93d4d28a8ee5f6ef0269008a833f9b8f8dc1f7f070e55ce972d00b911f" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.569190 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7cdc-account-create-rwhtz" event={"ID":"3a0ba682-5263-45cb-afc5-ca4d3f4b1354","Type":"ContainerDied","Data":"9b34ba575c3cd423da727a4ffb79e7aab7442be1147f9128162fad0b7bb06220"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.569243 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b34ba575c3cd423da727a4ffb79e7aab7442be1147f9128162fad0b7bb06220" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.572955 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-92hwc-config-thmq5" event={"ID":"b5ee6b67-09c3-4858-b121-c579ab266af7","Type":"ContainerDied","Data":"f972433657ac37da01fff9520ade2e2d42027529538d7a0b3fcd9697a8a8d1ef"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.573353 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f972433657ac37da01fff9520ade2e2d42027529538d7a0b3fcd9697a8a8d1ef" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.575244 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dbv7f" event={"ID":"0d85141d-b092-4062-a640-0192faa87846","Type":"ContainerDied","Data":"66313cc67818af14741e47c379b6ca3efa8ae3459fbe8a765f6814121b511cd8"} Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.575269 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66313cc67818af14741e47c379b6ca3efa8ae3459fbe8a765f6814121b511cd8" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.718022 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.744699 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.752818 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.774790 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.793237 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fpv7\" (UniqueName: \"kubernetes.io/projected/f060668a-f36b-47f8-88eb-f7ecc06a491c-kube-api-access-8fpv7\") pod \"f060668a-f36b-47f8-88eb-f7ecc06a491c\" (UID: \"f060668a-f36b-47f8-88eb-f7ecc06a491c\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.793293 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flt2v\" (UniqueName: \"kubernetes.io/projected/01cff46f-6371-4d95-b13d-e9b7a337c230-kube-api-access-flt2v\") pod \"01cff46f-6371-4d95-b13d-e9b7a337c230\" (UID: \"01cff46f-6371-4d95-b13d-e9b7a337c230\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.793435 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44qc6\" (UniqueName: \"kubernetes.io/projected/38e9927b-64ba-43f5-a5e7-2061e1288e8d-kube-api-access-44qc6\") pod \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\" (UID: \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.793489 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01cff46f-6371-4d95-b13d-e9b7a337c230-operator-scripts\") pod \"01cff46f-6371-4d95-b13d-e9b7a337c230\" (UID: \"01cff46f-6371-4d95-b13d-e9b7a337c230\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.793551 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f060668a-f36b-47f8-88eb-f7ecc06a491c-operator-scripts\") pod \"f060668a-f36b-47f8-88eb-f7ecc06a491c\" (UID: \"f060668a-f36b-47f8-88eb-f7ecc06a491c\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.793627 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38e9927b-64ba-43f5-a5e7-2061e1288e8d-operator-scripts\") pod \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\" (UID: \"38e9927b-64ba-43f5-a5e7-2061e1288e8d\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.794645 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01cff46f-6371-4d95-b13d-e9b7a337c230-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01cff46f-6371-4d95-b13d-e9b7a337c230" (UID: "01cff46f-6371-4d95-b13d-e9b7a337c230"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.794718 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f060668a-f36b-47f8-88eb-f7ecc06a491c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f060668a-f36b-47f8-88eb-f7ecc06a491c" (UID: "f060668a-f36b-47f8-88eb-f7ecc06a491c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.794713 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e9927b-64ba-43f5-a5e7-2061e1288e8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38e9927b-64ba-43f5-a5e7-2061e1288e8d" (UID: "38e9927b-64ba-43f5-a5e7-2061e1288e8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.795291 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38e9927b-64ba-43f5-a5e7-2061e1288e8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.795319 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01cff46f-6371-4d95-b13d-e9b7a337c230-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.795329 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f060668a-f36b-47f8-88eb-f7ecc06a491c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.797743 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e9927b-64ba-43f5-a5e7-2061e1288e8d-kube-api-access-44qc6" (OuterVolumeSpecName: "kube-api-access-44qc6") pod "38e9927b-64ba-43f5-a5e7-2061e1288e8d" (UID: "38e9927b-64ba-43f5-a5e7-2061e1288e8d"). InnerVolumeSpecName "kube-api-access-44qc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.799728 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.802573 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cff46f-6371-4d95-b13d-e9b7a337c230-kube-api-access-flt2v" (OuterVolumeSpecName: "kube-api-access-flt2v") pod "01cff46f-6371-4d95-b13d-e9b7a337c230" (UID: "01cff46f-6371-4d95-b13d-e9b7a337c230"). InnerVolumeSpecName "kube-api-access-flt2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.804302 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f060668a-f36b-47f8-88eb-f7ecc06a491c-kube-api-access-8fpv7" (OuterVolumeSpecName: "kube-api-access-8fpv7") pod "f060668a-f36b-47f8-88eb-f7ecc06a491c" (UID: "f060668a-f36b-47f8-88eb-f7ecc06a491c"). InnerVolumeSpecName "kube-api-access-8fpv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.810448 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.841442 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.859712 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.875417 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.895810 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-operator-scripts\") pod \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\" (UID: \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.895844 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run-ovn\") pod \"b5ee6b67-09c3-4858-b121-c579ab266af7\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.895872 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q46m7\" (UniqueName: \"kubernetes.io/projected/b5ee6b67-09c3-4858-b121-c579ab266af7-kube-api-access-q46m7\") pod \"b5ee6b67-09c3-4858-b121-c579ab266af7\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.895899 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d85141d-b092-4062-a640-0192faa87846-operator-scripts\") pod \"0d85141d-b092-4062-a640-0192faa87846\" (UID: \"0d85141d-b092-4062-a640-0192faa87846\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.895924 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg9tc\" (UniqueName: \"kubernetes.io/projected/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-kube-api-access-fg9tc\") pod \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\" (UID: \"3a0ba682-5263-45cb-afc5-ca4d3f4b1354\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.895948 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6955\" (UniqueName: \"kubernetes.io/projected/2495ae3d-a5dd-4c69-86d8-6081174afdc0-kube-api-access-h6955\") pod \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\" (UID: \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.895978 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-log-ovn\") pod \"b5ee6b67-09c3-4858-b121-c579ab266af7\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896022 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-additional-scripts\") pod \"b5ee6b67-09c3-4858-b121-c579ab266af7\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896063 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsgs6\" (UniqueName: \"kubernetes.io/projected/0f5088e7-042f-48f0-97d3-d97e820ac314-kube-api-access-jsgs6\") pod \"0f5088e7-042f-48f0-97d3-d97e820ac314\" (UID: \"0f5088e7-042f-48f0-97d3-d97e820ac314\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 
12:31:39.896086 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run\") pod \"b5ee6b67-09c3-4858-b121-c579ab266af7\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896103 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-scripts\") pod \"b5ee6b67-09c3-4858-b121-c579ab266af7\" (UID: \"b5ee6b67-09c3-4858-b121-c579ab266af7\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896169 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhbt9\" (UniqueName: \"kubernetes.io/projected/0d85141d-b092-4062-a640-0192faa87846-kube-api-access-hhbt9\") pod \"0d85141d-b092-4062-a640-0192faa87846\" (UID: \"0d85141d-b092-4062-a640-0192faa87846\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896195 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495ae3d-a5dd-4c69-86d8-6081174afdc0-operator-scripts\") pod \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\" (UID: \"2495ae3d-a5dd-4c69-86d8-6081174afdc0\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896252 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f5088e7-042f-48f0-97d3-d97e820ac314-operator-scripts\") pod \"0f5088e7-042f-48f0-97d3-d97e820ac314\" (UID: \"0f5088e7-042f-48f0-97d3-d97e820ac314\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896576 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44qc6\" (UniqueName: \"kubernetes.io/projected/38e9927b-64ba-43f5-a5e7-2061e1288e8d-kube-api-access-44qc6\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896594 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fpv7\" (UniqueName: \"kubernetes.io/projected/f060668a-f36b-47f8-88eb-f7ecc06a491c-kube-api-access-8fpv7\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896603 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flt2v\" (UniqueName: \"kubernetes.io/projected/01cff46f-6371-4d95-b13d-e9b7a337c230-kube-api-access-flt2v\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.896618 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b5ee6b67-09c3-4858-b121-c579ab266af7" (UID: "b5ee6b67-09c3-4858-b121-c579ab266af7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.897723 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a0ba682-5263-45cb-afc5-ca4d3f4b1354" (UID: "3a0ba682-5263-45cb-afc5-ca4d3f4b1354"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.898976 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d85141d-b092-4062-a640-0192faa87846-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d85141d-b092-4062-a640-0192faa87846" (UID: "0d85141d-b092-4062-a640-0192faa87846"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.899096 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b5ee6b67-09c3-4858-b121-c579ab266af7" (UID: "b5ee6b67-09c3-4858-b121-c579ab266af7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.899351 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5088e7-042f-48f0-97d3-d97e820ac314-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f5088e7-042f-48f0-97d3-d97e820ac314" (UID: "0f5088e7-042f-48f0-97d3-d97e820ac314"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.899459 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run" (OuterVolumeSpecName: "var-run") pod "b5ee6b67-09c3-4858-b121-c579ab266af7" (UID: "b5ee6b67-09c3-4858-b121-c579ab266af7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.900317 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b5ee6b67-09c3-4858-b121-c579ab266af7" (UID: "b5ee6b67-09c3-4858-b121-c579ab266af7"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.901307 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-scripts" (OuterVolumeSpecName: "scripts") pod "b5ee6b67-09c3-4858-b121-c579ab266af7" (UID: "b5ee6b67-09c3-4858-b121-c579ab266af7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.906377 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2495ae3d-a5dd-4c69-86d8-6081174afdc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2495ae3d-a5dd-4c69-86d8-6081174afdc0" (UID: "2495ae3d-a5dd-4c69-86d8-6081174afdc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.907081 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ee6b67-09c3-4858-b121-c579ab266af7-kube-api-access-q46m7" (OuterVolumeSpecName: "kube-api-access-q46m7") pod "b5ee6b67-09c3-4858-b121-c579ab266af7" (UID: "b5ee6b67-09c3-4858-b121-c579ab266af7"). InnerVolumeSpecName "kube-api-access-q46m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.908394 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d85141d-b092-4062-a640-0192faa87846-kube-api-access-hhbt9" (OuterVolumeSpecName: "kube-api-access-hhbt9") pod "0d85141d-b092-4062-a640-0192faa87846" (UID: "0d85141d-b092-4062-a640-0192faa87846"). InnerVolumeSpecName "kube-api-access-hhbt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.913840 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-kube-api-access-fg9tc" (OuterVolumeSpecName: "kube-api-access-fg9tc") pod "3a0ba682-5263-45cb-afc5-ca4d3f4b1354" (UID: "3a0ba682-5263-45cb-afc5-ca4d3f4b1354"). InnerVolumeSpecName "kube-api-access-fg9tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.914219 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5088e7-042f-48f0-97d3-d97e820ac314-kube-api-access-jsgs6" (OuterVolumeSpecName: "kube-api-access-jsgs6") pod "0f5088e7-042f-48f0-97d3-d97e820ac314" (UID: "0f5088e7-042f-48f0-97d3-d97e820ac314"). InnerVolumeSpecName "kube-api-access-jsgs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.915131 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2495ae3d-a5dd-4c69-86d8-6081174afdc0-kube-api-access-h6955" (OuterVolumeSpecName: "kube-api-access-h6955") pod "2495ae3d-a5dd-4c69-86d8-6081174afdc0" (UID: "2495ae3d-a5dd-4c69-86d8-6081174afdc0"). InnerVolumeSpecName "kube-api-access-h6955". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998008 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl8cs\" (UniqueName: \"kubernetes.io/projected/9eb3a057-3576-4c65-a327-c3325780d24a-kube-api-access-xl8cs\") pod \"9eb3a057-3576-4c65-a327-c3325780d24a\" (UID: \"9eb3a057-3576-4c65-a327-c3325780d24a\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998163 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eb3a057-3576-4c65-a327-c3325780d24a-operator-scripts\") pod \"9eb3a057-3576-4c65-a327-c3325780d24a\" (UID: \"9eb3a057-3576-4c65-a327-c3325780d24a\") " Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998627 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhbt9\" (UniqueName: \"kubernetes.io/projected/0d85141d-b092-4062-a640-0192faa87846-kube-api-access-hhbt9\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998655 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495ae3d-a5dd-4c69-86d8-6081174afdc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998668 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f5088e7-042f-48f0-97d3-d97e820ac314-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998681 4688 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998695 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998707 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q46m7\" (UniqueName: \"kubernetes.io/projected/b5ee6b67-09c3-4858-b121-c579ab266af7-kube-api-access-q46m7\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998718 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d85141d-b092-4062-a640-0192faa87846-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998730 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg9tc\" (UniqueName: \"kubernetes.io/projected/3a0ba682-5263-45cb-afc5-ca4d3f4b1354-kube-api-access-fg9tc\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998741 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6955\" (UniqueName: \"kubernetes.io/projected/2495ae3d-a5dd-4c69-86d8-6081174afdc0-kube-api-access-h6955\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998653 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb3a057-3576-4c65-a327-c3325780d24a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"9eb3a057-3576-4c65-a327-c3325780d24a" (UID: "9eb3a057-3576-4c65-a327-c3325780d24a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998751 4688 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998802 4688 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998817 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsgs6\" (UniqueName: \"kubernetes.io/projected/0f5088e7-042f-48f0-97d3-d97e820ac314-kube-api-access-jsgs6\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998833 4688 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b5ee6b67-09c3-4858-b121-c579ab266af7-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:39 crc kubenswrapper[4688]: I1125 12:31:39.998845 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5ee6b67-09c3-4858-b121-c579ab266af7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.000780 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb3a057-3576-4c65-a327-c3325780d24a-kube-api-access-xl8cs" (OuterVolumeSpecName: "kube-api-access-xl8cs") pod "9eb3a057-3576-4c65-a327-c3325780d24a" (UID: "9eb3a057-3576-4c65-a327-c3325780d24a"). InnerVolumeSpecName "kube-api-access-xl8cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.099732 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eb3a057-3576-4c65-a327-c3325780d24a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.099983 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl8cs\" (UniqueName: \"kubernetes.io/projected/9eb3a057-3576-4c65-a327-c3325780d24a-kube-api-access-xl8cs\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.585956 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rqdck" event={"ID":"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0","Type":"ContainerStarted","Data":"232475f4d078877f621b428449899c525e085f7448a466f5001974157db1d00e"} Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.593434 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j2kst" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.600636 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rn78b" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.600714 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bz5zq" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.600799 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"6a62735ffb60f3b28f7adf853eb28675247fa55b7a5a620529a7439f594c3f01"} Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.600837 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"7181ced147956e0cb57ec8558110258b4a18e06c4ad56508d32fba62368937e5"} Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.600877 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4668-account-create-vvbfr" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.600926 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cab6-account-create-v69wv" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.600970 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ee49-account-create-z8krp" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.601696 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-dbv7f" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.601931 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-92hwc-config-thmq5" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.601992 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7cdc-account-create-rwhtz" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.628137 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rqdck" podStartSLOduration=6.013163709 podStartE2EDuration="11.628108953s" podCreationTimestamp="2025-11-25 12:31:29 +0000 UTC" firstStartedPulling="2025-11-25 12:31:33.882318817 +0000 UTC m=+1043.991947685" lastFinishedPulling="2025-11-25 12:31:39.497264061 +0000 UTC m=+1049.606892929" observedRunningTime="2025-11-25 12:31:40.601492528 +0000 UTC m=+1050.711121416" watchObservedRunningTime="2025-11-25 12:31:40.628108953 +0000 UTC m=+1050.737737821" Nov 25 12:31:40 crc kubenswrapper[4688]: I1125 12:31:40.989183 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-92hwc-config-thmq5"] Nov 25 12:31:41 crc kubenswrapper[4688]: I1125 12:31:41.002988 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-92hwc-config-thmq5"] Nov 25 12:31:41 crc kubenswrapper[4688]: I1125 12:31:41.618246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"31bdfc62df8a6f1ee856093b9aaad011cfede1ca6de583b0838bf0d8f6928d4c"} Nov 25 12:31:41 crc kubenswrapper[4688]: I1125 12:31:41.618628 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"77144c47f54f5e95e8531a01baa4dfb4f210ec5542f4c319178681d59ffb4f2b"} Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.631767 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"64a9667e7e83824f267047dbb02009f8f537bdaf8eef5df712a58e44959d8ec9"} Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.632070 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"075ff654d85da16f0b4685cfe7aa131cb26380128d2bc0f804e0d42c3d52c2db"} Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.632087 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"e4d07267eabb2d0c389c6904d542ea169b64f37ea39e1d51d1fce70d9cc8704c"} Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.632105 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"bf00a40ef53ada21588d26422937606fab77f372b6013bdb58013333af468dc0"} Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.632115 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"000479f0-0b04-4867-989b-622c2e951f4b","Type":"ContainerStarted","Data":"9f32f1a52599223e26a2a799051cb146acb3ab26043634748b904da9eb8488d1"} Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.685894 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.212770611 podStartE2EDuration="38.685876998s" podCreationTimestamp="2025-11-25 12:31:04 +0000 UTC" firstStartedPulling="2025-11-25 12:31:22.66117609 +0000 UTC m=+1032.770804958" lastFinishedPulling="2025-11-25 12:31:41.134282477 +0000 UTC m=+1051.243911345" observedRunningTime="2025-11-25 12:31:42.671900633 +0000 UTC m=+1052.781529501" watchObservedRunningTime="2025-11-25 12:31:42.685876998 +0000 UTC m=+1052.795505866" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.750628 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ee6b67-09c3-4858-b121-c579ab266af7" path="/var/lib/kubelet/pods/b5ee6b67-09c3-4858-b121-c579ab266af7/volumes" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953137 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-f97r9"] Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953613 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ee6b67-09c3-4858-b121-c579ab266af7" containerName="ovn-config" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953635 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ee6b67-09c3-4858-b121-c579ab266af7" containerName="ovn-config" Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953648 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0ba682-5263-45cb-afc5-ca4d3f4b1354" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953659 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0ba682-5263-45cb-afc5-ca4d3f4b1354" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953671 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2495ae3d-a5dd-4c69-86d8-6081174afdc0" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953678 4688 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2495ae3d-a5dd-4c69-86d8-6081174afdc0" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953696 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d85141d-b092-4062-a640-0192faa87846" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953703 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d85141d-b092-4062-a640-0192faa87846" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953717 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e9927b-64ba-43f5-a5e7-2061e1288e8d" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953723 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e9927b-64ba-43f5-a5e7-2061e1288e8d" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953751 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb3a057-3576-4c65-a327-c3325780d24a" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953758 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb3a057-3576-4c65-a327-c3325780d24a" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953781 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5088e7-042f-48f0-97d3-d97e820ac314" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953788 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5088e7-042f-48f0-97d3-d97e820ac314" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953799 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cff46f-6371-4d95-b13d-e9b7a337c230" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953805 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="01cff46f-6371-4d95-b13d-e9b7a337c230" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: E1125 12:31:42.953813 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f060668a-f36b-47f8-88eb-f7ecc06a491c" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953819 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f060668a-f36b-47f8-88eb-f7ecc06a491c" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.953980 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb3a057-3576-4c65-a327-c3325780d24a" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954007 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="01cff46f-6371-4d95-b13d-e9b7a337c230" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954025 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2495ae3d-a5dd-4c69-86d8-6081174afdc0" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954036 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d85141d-b092-4062-a640-0192faa87846" containerName="mariadb-database-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954048 4688 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f060668a-f36b-47f8-88eb-f7ecc06a491c" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954072 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0ba682-5263-45cb-afc5-ca4d3f4b1354" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954081 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5088e7-042f-48f0-97d3-d97e820ac314" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954090 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ee6b67-09c3-4858-b121-c579ab266af7" containerName="ovn-config" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954100 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e9927b-64ba-43f5-a5e7-2061e1288e8d" containerName="mariadb-account-create" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.954903 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.962709 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 12:31:42 crc kubenswrapper[4688]: I1125 12:31:42.975211 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-f97r9"] Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.070306 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.070355 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djk62\" (UniqueName: \"kubernetes.io/projected/c71f5edc-39d9-4386-85e5-e304ee06f318-kube-api-access-djk62\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.070384 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.070403 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-config\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.070568 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.070665 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-svc\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.171891 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.171940 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djk62\" (UniqueName: \"kubernetes.io/projected/c71f5edc-39d9-4386-85e5-e304ee06f318-kube-api-access-djk62\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.171969 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.171985 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-config\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.172019 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.172069 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-svc\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.173210 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-config\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.173303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-svc\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.179338 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.179399 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.179860 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.191852 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djk62\" (UniqueName: \"kubernetes.io/projected/c71f5edc-39d9-4386-85e5-e304ee06f318-kube-api-access-djk62\") pod \"dnsmasq-dns-764c5664d7-f97r9\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") " pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.309759 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:43 crc kubenswrapper[4688]: I1125 12:31:43.785842 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-f97r9"] Nov 25 12:31:44 crc kubenswrapper[4688]: I1125 12:31:44.649409 4688 generic.go:334] "Generic (PLEG): container finished" podID="c71f5edc-39d9-4386-85e5-e304ee06f318" containerID="0a663f93779b74729816102cefaea90e0d93c5e415bb98ee1fa840f41257187f" exitCode=0 Nov 25 12:31:44 crc kubenswrapper[4688]: I1125 12:31:44.649582 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" event={"ID":"c71f5edc-39d9-4386-85e5-e304ee06f318","Type":"ContainerDied","Data":"0a663f93779b74729816102cefaea90e0d93c5e415bb98ee1fa840f41257187f"} Nov 25 12:31:44 crc kubenswrapper[4688]: I1125 12:31:44.649751 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" event={"ID":"c71f5edc-39d9-4386-85e5-e304ee06f318","Type":"ContainerStarted","Data":"35a2e9cb536abd11b0da81f37db16520a9207a6815ce43627541a21a75ba5289"} Nov 25 12:31:45 crc kubenswrapper[4688]: I1125 12:31:45.668792 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" event={"ID":"c71f5edc-39d9-4386-85e5-e304ee06f318","Type":"ContainerStarted","Data":"d7cbe2f712401aa13df6512e3d3fd1d7bee880dfe01ed24f2f6e9df6d0ba661a"} Nov 25 12:31:45 crc kubenswrapper[4688]: I1125 12:31:45.672205 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:45 crc kubenswrapper[4688]: I1125 12:31:45.690656 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" podStartSLOduration=3.690634919 podStartE2EDuration="3.690634919s" podCreationTimestamp="2025-11-25 12:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 
12:31:45.687130325 +0000 UTC m=+1055.796759193" watchObservedRunningTime="2025-11-25 12:31:45.690634919 +0000 UTC m=+1055.800263787" Nov 25 12:31:47 crc kubenswrapper[4688]: I1125 12:31:47.692026 4688 generic.go:334] "Generic (PLEG): container finished" podID="ab5ac079-6de6-4cc5-aad0-86bf3b34feb0" containerID="232475f4d078877f621b428449899c525e085f7448a466f5001974157db1d00e" exitCode=0 Nov 25 12:31:47 crc kubenswrapper[4688]: I1125 12:31:47.692170 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rqdck" event={"ID":"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0","Type":"ContainerDied","Data":"232475f4d078877f621b428449899c525e085f7448a466f5001974157db1d00e"} Nov 25 12:31:48 crc kubenswrapper[4688]: I1125 12:31:48.702644 4688 generic.go:334] "Generic (PLEG): container finished" podID="79d582d3-e8f8-49a4-a48c-9c07f6083db5" containerID="7e3445c248401cd21f1a77c6bc99c1f59a5e099500e4cfde7d123456e9ca859e" exitCode=0 Nov 25 12:31:48 crc kubenswrapper[4688]: I1125 12:31:48.702828 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rj4gb" event={"ID":"79d582d3-e8f8-49a4-a48c-9c07f6083db5","Type":"ContainerDied","Data":"7e3445c248401cd21f1a77c6bc99c1f59a5e099500e4cfde7d123456e9ca859e"} Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.003199 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.091991 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-config-data\") pod \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.092058 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k2rw\" (UniqueName: \"kubernetes.io/projected/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-kube-api-access-4k2rw\") pod \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.092092 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-combined-ca-bundle\") pod \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\" (UID: \"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0\") " Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.097791 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-kube-api-access-4k2rw" (OuterVolumeSpecName: "kube-api-access-4k2rw") pod "ab5ac079-6de6-4cc5-aad0-86bf3b34feb0" (UID: "ab5ac079-6de6-4cc5-aad0-86bf3b34feb0"). InnerVolumeSpecName "kube-api-access-4k2rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.117615 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab5ac079-6de6-4cc5-aad0-86bf3b34feb0" (UID: "ab5ac079-6de6-4cc5-aad0-86bf3b34feb0"). InnerVolumeSpecName "combined-ca-bundle". 
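One detail in the dnsmasq-dns-764c5664d7-f97r9 latency entry above: firstStartedPulling and lastFinishedPulling are "0001-01-01 00:00:00 +0000 UTC", Go's zero time.Time, which the tracker emits when no image pull was observed (the images were already cached), so podStartSLOduration equals podStartE2EDuration (both 3.690634919s). A small guard for that sentinel when post-processing these entries; the helper name is hypothetical:

```python
GO_ZERO_TIME = "0001-01-01 00:00:00 +0000 UTC"

def pull_window(first_started: str, last_finished: str):
    """Return (start, end) of the image-pull window, or None when the kubelet
    recorded Go's zero time, i.e. the images were already present locally."""
    if GO_ZERO_TIME in (first_started, last_finished):
        return None
    return first_started, last_finished
```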
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.138746 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-config-data" (OuterVolumeSpecName: "config-data") pod "ab5ac079-6de6-4cc5-aad0-86bf3b34feb0" (UID: "ab5ac079-6de6-4cc5-aad0-86bf3b34feb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.193878 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.193920 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k2rw\" (UniqueName: \"kubernetes.io/projected/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-kube-api-access-4k2rw\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.193935 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.711370 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rqdck" event={"ID":"ab5ac079-6de6-4cc5-aad0-86bf3b34feb0","Type":"ContainerDied","Data":"b3fc0cbfc5c463b3912540d17616396c5c2e1a18d594ac237ad137bcb9c4b813"} Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.711413 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rqdck" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.711424 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3fc0cbfc5c463b3912540d17616396c5c2e1a18d594ac237ad137bcb9c4b813" Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.965922 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-f97r9"] Nov 25 12:31:49 crc kubenswrapper[4688]: I1125 12:31:49.968050 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" podUID="c71f5edc-39d9-4386-85e5-e304ee06f318" containerName="dnsmasq-dns" containerID="cri-o://d7cbe2f712401aa13df6512e3d3fd1d7bee880dfe01ed24f2f6e9df6d0ba661a" gracePeriod=10 Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.011086 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8kpvb"] Nov 25 12:31:50 crc kubenswrapper[4688]: E1125 12:31:50.011490 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5ac079-6de6-4cc5-aad0-86bf3b34feb0" containerName="keystone-db-sync" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.011507 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5ac079-6de6-4cc5-aad0-86bf3b34feb0" containerName="keystone-db-sync" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.011705 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5ac079-6de6-4cc5-aad0-86bf3b34feb0" containerName="keystone-db-sync" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.012337 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.016438 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.016739 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.016808 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.016743 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.016923 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f7h8n" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.022992 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-2xg2m"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.024669 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.038191 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8kpvb"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.089628 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-2xg2m"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.134866 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df2hr\" (UniqueName: \"kubernetes.io/projected/8659d06d-b50f-4258-9eb8-44b1de249e34-kube-api-access-df2hr\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.134911 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-combined-ca-bundle\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.134938 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-fernet-keys\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135034 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-config-data\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135063 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-config\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 
12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135180 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-scripts\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135212 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-svc\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135296 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vpv\" (UniqueName: \"kubernetes.io/projected/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-kube-api-access-l8vpv\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135337 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-credential-keys\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135402 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.135427 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.160450 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-tgslp"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.161989 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.182583 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-tgslp"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.186886 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gn9hd" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.198573 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238609 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238650 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238711 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df2hr\" (UniqueName: \"kubernetes.io/projected/8659d06d-b50f-4258-9eb8-44b1de249e34-kube-api-access-df2hr\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238730 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-combined-ca-bundle\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238746 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-fernet-keys\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238778 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-config-data\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238793 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-config\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238815 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: 
\"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238837 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-scripts\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238856 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-svc\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238888 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vpv\" (UniqueName: \"kubernetes.io/projected/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-kube-api-access-l8vpv\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.238911 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-credential-keys\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.240791 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.242440 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-svc\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.243043 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.243086 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.254765 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-config-data\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 
12:31:50.255836 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-config\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.266128 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-scripts\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.267476 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-credential-keys\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.267645 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-combined-ca-bundle\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.274019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-fernet-keys\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.276287 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vpv\" (UniqueName: \"kubernetes.io/projected/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-kube-api-access-l8vpv\") pod \"dnsmasq-dns-5959f8865f-2xg2m\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") " pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.300419 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df2hr\" (UniqueName: \"kubernetes.io/projected/8659d06d-b50f-4258-9eb8-44b1de249e34-kube-api-access-df2hr\") pod \"keystone-bootstrap-8kpvb\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.340215 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-combined-ca-bundle\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.340300 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-config-data\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.340347 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwq2\" (UniqueName: 
\"kubernetes.io/projected/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-kube-api-access-9nwq2\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.371828 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.376927 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.447594 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nwq2\" (UniqueName: \"kubernetes.io/projected/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-kube-api-access-9nwq2\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.447709 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-combined-ca-bundle\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.447790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-config-data\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.458487 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-config-data\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.468175 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.469425 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-combined-ca-bundle\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.502384 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:31:50 crc kubenswrapper[4688]: E1125 12:31:50.502904 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d582d3-e8f8-49a4-a48c-9c07f6083db5" containerName="glance-db-sync" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.502922 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d582d3-e8f8-49a4-a48c-9c07f6083db5" containerName="glance-db-sync" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.503733 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d582d3-e8f8-49a4-a48c-9c07f6083db5" containerName="glance-db-sync" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.518971 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.525920 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.526143 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.549358 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-db-sync-config-data\") pod \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.549545 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-combined-ca-bundle\") pod \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.549648 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-config-data\") pod \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.549692 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tpjz\" (UniqueName: \"kubernetes.io/projected/79d582d3-e8f8-49a4-a48c-9c07f6083db5-kube-api-access-7tpjz\") pod \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\" (UID: \"79d582d3-e8f8-49a4-a48c-9c07f6083db5\") " Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.567699 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "79d582d3-e8f8-49a4-a48c-9c07f6083db5" (UID: "79d582d3-e8f8-49a4-a48c-9c07f6083db5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.568148 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nwq2\" (UniqueName: \"kubernetes.io/projected/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-kube-api-access-9nwq2\") pod \"heat-db-sync-tgslp\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.569933 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d582d3-e8f8-49a4-a48c-9c07f6083db5-kube-api-access-7tpjz" (OuterVolumeSpecName: "kube-api-access-7tpjz") pod "79d582d3-e8f8-49a4-a48c-9c07f6083db5" (UID: "79d582d3-e8f8-49a4-a48c-9c07f6083db5"). InnerVolumeSpecName "kube-api-access-7tpjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.586685 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-2xg2m"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.602649 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-slb7z"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.603837 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.613108 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.619596 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.619677 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.639463 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-wctjb"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.640841 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.643466 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kz9g5" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.645706 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.646164 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-86cvc" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.646270 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.652550 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-log-httpd\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.652619 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.652639 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-scripts\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.657685 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dh6m\" (UniqueName: \"kubernetes.io/projected/2611fb13-90fc-4310-a8dc-c224f4689a9f-kube-api-access-5dh6m\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.657743 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-config-data\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.657856 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.657929 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-run-httpd\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.658016 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.658028 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tpjz\" (UniqueName: \"kubernetes.io/projected/79d582d3-e8f8-49a4-a48c-9c07f6083db5-kube-api-access-7tpjz\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.663792 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-config-data" (OuterVolumeSpecName: "config-data") pod "79d582d3-e8f8-49a4-a48c-9c07f6083db5" (UID: "79d582d3-e8f8-49a4-a48c-9c07f6083db5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.670117 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rzvvs"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.671823 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.674389 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gbxsr" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.674788 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.674954 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.682101 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d582d3-e8f8-49a4-a48c-9c07f6083db5" (UID: "79d582d3-e8f8-49a4-a48c-9c07f6083db5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.682156 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-slb7z"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.696307 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wctjb"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.702371 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rzvvs"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.708734 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-td7pk"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.710360 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.715699 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-crb7s"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.732356 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-crb7s" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.745937 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.746275 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gx949" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766189 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-scripts\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766229 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-combined-ca-bundle\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766364 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dh6m\" (UniqueName: \"kubernetes.io/projected/2611fb13-90fc-4310-a8dc-c224f4689a9f-kube-api-access-5dh6m\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766385 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-config-data\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-config-data\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766554 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-combined-ca-bundle\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766590 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-db-sync-config-data\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-combined-ca-bundle\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766752 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748zq\" (UniqueName: \"kubernetes.io/projected/4297ef88-82df-476f-90f6-e87b26dae1fd-kube-api-access-748zq\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.766899 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-scripts\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.767022 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/588d841f-905c-42bb-9242-2e86b7e66877-etc-machine-id\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.767057 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.767802 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbcdb\" (UniqueName: \"kubernetes.io/projected/5b031476-5e95-46a9-8774-4073f647cb7a-kube-api-access-lbcdb\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.767850 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbwn\" (UniqueName: 
\"kubernetes.io/projected/588d841f-905c-42bb-9242-2e86b7e66877-kube-api-access-jbbwn\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.767948 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-run-httpd\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.767996 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-config\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.768045 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-scripts\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.768068 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-config-data\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.768089 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b031476-5e95-46a9-8774-4073f647cb7a-logs\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.768126 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-log-httpd\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.772909 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.772986 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d582d3-e8f8-49a4-a48c-9c07f6083db5-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.773388 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-run-httpd\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.774795 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.779059 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.779309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-log-httpd\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.783609 4688 generic.go:334] "Generic (PLEG): container finished" podID="c71f5edc-39d9-4386-85e5-e304ee06f318" containerID="d7cbe2f712401aa13df6512e3d3fd1d7bee880dfe01ed24f2f6e9df6d0ba661a" exitCode=0 Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.785284 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-scripts\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.789596 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-config-data\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.789923 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rj4gb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.790209 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-td7pk"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.790241 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" event={"ID":"c71f5edc-39d9-4386-85e5-e304ee06f318","Type":"ContainerDied","Data":"d7cbe2f712401aa13df6512e3d3fd1d7bee880dfe01ed24f2f6e9df6d0ba661a"} Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.790262 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" event={"ID":"c71f5edc-39d9-4386-85e5-e304ee06f318","Type":"ContainerDied","Data":"35a2e9cb536abd11b0da81f37db16520a9207a6815ce43627541a21a75ba5289"} Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.790277 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35a2e9cb536abd11b0da81f37db16520a9207a6815ce43627541a21a75ba5289" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.790285 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rj4gb" event={"ID":"79d582d3-e8f8-49a4-a48c-9c07f6083db5","Type":"ContainerDied","Data":"82b765e94634ea582a1bd3801742ddeb201aa86e5f7ee5b56a288fe6b17a87e9"} Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.790293 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b765e94634ea582a1bd3801742ddeb201aa86e5f7ee5b56a288fe6b17a87e9" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.790302 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-crb7s"] Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.794360 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dh6m\" (UniqueName: \"kubernetes.io/projected/2611fb13-90fc-4310-a8dc-c224f4689a9f-kube-api-access-5dh6m\") pod \"ceilometer-0\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.800354 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-f97r9" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.806566 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-tgslp" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.843448 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.877452 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.877511 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.877837 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-combined-ca-bundle\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.877901 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-config-data\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.877935 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-db-sync-config-data\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.877964 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-combined-ca-bundle\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.877989 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-combined-ca-bundle\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878019 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-config\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878050 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-db-sync-config-data\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " 
pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878078 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-combined-ca-bundle\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878106 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878144 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748zq\" (UniqueName: \"kubernetes.io/projected/4297ef88-82df-476f-90f6-e87b26dae1fd-kube-api-access-748zq\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878170 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48fm\" (UniqueName: \"kubernetes.io/projected/b8b45337-3ba9-48f5-850a-bc66e7178054-kube-api-access-m48fm\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878209 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-scripts\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878241 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/588d841f-905c-42bb-9242-2e86b7e66877-etc-machine-id\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878288 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbcdb\" (UniqueName: \"kubernetes.io/projected/5b031476-5e95-46a9-8774-4073f647cb7a-kube-api-access-lbcdb\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878317 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbwn\" (UniqueName: \"kubernetes.io/projected/588d841f-905c-42bb-9242-2e86b7e66877-kube-api-access-jbbwn\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878364 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-config\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: 
I1125 12:31:50.878397 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-scripts\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878417 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-config-data\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878442 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b031476-5e95-46a9-8774-4073f647cb7a-logs\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878479 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.878539 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhq7\" (UniqueName: \"kubernetes.io/projected/05379dbe-faf8-4ac1-a032-40f31cb4e457-kube-api-access-lxhq7\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.884443 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-scripts\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.884586 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/588d841f-905c-42bb-9242-2e86b7e66877-etc-machine-id\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.893111 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b031476-5e95-46a9-8774-4073f647cb7a-logs\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.893443 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-combined-ca-bundle\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.898067 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-db-sync-config-data\") pod 
\"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.900932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-config\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.902269 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-scripts\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.904169 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbcdb\" (UniqueName: \"kubernetes.io/projected/5b031476-5e95-46a9-8774-4073f647cb7a-kube-api-access-lbcdb\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.905683 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-config-data\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.908936 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-config-data\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.910036 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-combined-ca-bundle\") pod \"placement-db-sync-rzvvs\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " pod="openstack/placement-db-sync-rzvvs" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.910796 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-combined-ca-bundle\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.923347 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbwn\" (UniqueName: \"kubernetes.io/projected/588d841f-905c-42bb-9242-2e86b7e66877-kube-api-access-jbbwn\") pod \"cinder-db-sync-slb7z\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " pod="openstack/cinder-db-sync-slb7z" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.937071 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748zq\" (UniqueName: \"kubernetes.io/projected/4297ef88-82df-476f-90f6-e87b26dae1fd-kube-api-access-748zq\") pod \"neutron-db-sync-wctjb\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " pod="openstack/neutron-db-sync-wctjb" Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.979642 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-swift-storage-0\") pod \"c71f5edc-39d9-4386-85e5-e304ee06f318\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") "
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.979714 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-nb\") pod \"c71f5edc-39d9-4386-85e5-e304ee06f318\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") "
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.979821 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-svc\") pod \"c71f5edc-39d9-4386-85e5-e304ee06f318\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") "
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.979856 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-config\") pod \"c71f5edc-39d9-4386-85e5-e304ee06f318\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") "
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.979904 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-sb\") pod \"c71f5edc-39d9-4386-85e5-e304ee06f318\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") "
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.979966 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djk62\" (UniqueName: \"kubernetes.io/projected/c71f5edc-39d9-4386-85e5-e304ee06f318-kube-api-access-djk62\") pod \"c71f5edc-39d9-4386-85e5-e304ee06f318\" (UID: \"c71f5edc-39d9-4386-85e5-e304ee06f318\") "
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980349 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980392 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhq7\" (UniqueName: \"kubernetes.io/projected/05379dbe-faf8-4ac1-a032-40f31cb4e457-kube-api-access-lxhq7\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980462 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980488 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980558 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-db-sync-config-data\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980577 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-combined-ca-bundle\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980597 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-config\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980626 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.980653 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48fm\" (UniqueName: \"kubernetes.io/projected/b8b45337-3ba9-48f5-850a-bc66e7178054-kube-api-access-m48fm\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.981851 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.981949 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.983671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-config\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.986222 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.989241 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-slb7z"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.989963 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.990450 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71f5edc-39d9-4386-85e5-e304ee06f318-kube-api-access-djk62" (OuterVolumeSpecName: "kube-api-access-djk62") pod "c71f5edc-39d9-4386-85e5-e304ee06f318" (UID: "c71f5edc-39d9-4386-85e5-e304ee06f318"). InnerVolumeSpecName "kube-api-access-djk62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:31:50 crc kubenswrapper[4688]: I1125 12:31:50.995337 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-db-sync-config-data\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.009008 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxhq7\" (UniqueName: \"kubernetes.io/projected/05379dbe-faf8-4ac1-a032-40f31cb4e457-kube-api-access-lxhq7\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.009650 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-combined-ca-bundle\") pod \"barbican-db-sync-crb7s\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " pod="openstack/barbican-db-sync-crb7s"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.021728 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48fm\" (UniqueName: \"kubernetes.io/projected/b8b45337-3ba9-48f5-850a-bc66e7178054-kube-api-access-m48fm\") pod \"dnsmasq-dns-58dd9ff6bc-td7pk\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.046817 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wctjb"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.084829 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djk62\" (UniqueName: \"kubernetes.io/projected/c71f5edc-39d9-4386-85e5-e304ee06f318-kube-api-access-djk62\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.084972 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rzvvs"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.110647 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c71f5edc-39d9-4386-85e5-e304ee06f318" (UID: "c71f5edc-39d9-4386-85e5-e304ee06f318"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.118813 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8kpvb"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.139920 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.154134 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c71f5edc-39d9-4386-85e5-e304ee06f318" (UID: "c71f5edc-39d9-4386-85e5-e304ee06f318"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.178654 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-crb7s"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.188588 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.188617 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.192629 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-td7pk"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.197943 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c71f5edc-39d9-4386-85e5-e304ee06f318" (UID: "c71f5edc-39d9-4386-85e5-e304ee06f318"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.213015 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-config" (OuterVolumeSpecName: "config") pod "c71f5edc-39d9-4386-85e5-e304ee06f318" (UID: "c71f5edc-39d9-4386-85e5-e304ee06f318"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.217945 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jqj9f"]
Nov 25 12:31:51 crc kubenswrapper[4688]: E1125 12:31:51.218333 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71f5edc-39d9-4386-85e5-e304ee06f318" containerName="init"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.218345 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71f5edc-39d9-4386-85e5-e304ee06f318" containerName="init"
Nov 25 12:31:51 crc kubenswrapper[4688]: E1125 12:31:51.218368 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71f5edc-39d9-4386-85e5-e304ee06f318" containerName="dnsmasq-dns"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.218374 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71f5edc-39d9-4386-85e5-e304ee06f318" containerName="dnsmasq-dns"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.218514 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71f5edc-39d9-4386-85e5-e304ee06f318" containerName="dnsmasq-dns"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.219720 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.227987 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c71f5edc-39d9-4386-85e5-e304ee06f318" (UID: "c71f5edc-39d9-4386-85e5-e304ee06f318"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.257582 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jqj9f"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.292503 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-config\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.292612 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt959\" (UniqueName: \"kubernetes.io/projected/4e944f85-67c1-480a-8656-e34aba801d33-kube-api-access-lt959\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.292633 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.292650 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.293084 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.293649 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.293719 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-config\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.293733 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.293741 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c71f5edc-39d9-4386-85e5-e304ee06f318-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.315395 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-2xg2m"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.394794 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.395197 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-config\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.395273 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt959\" (UniqueName: \"kubernetes.io/projected/4e944f85-67c1-480a-8656-e34aba801d33-kube-api-access-lt959\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.395299 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.395318 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.395367 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.396201 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.396245 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.396743 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-config\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.397088 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.397243 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.417039 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt959\" (UniqueName: \"kubernetes.io/projected/4e944f85-67c1-480a-8656-e34aba801d33-kube-api-access-lt959\") pod \"dnsmasq-dns-785d8bcb8c-jqj9f\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.453816 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.508802 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-tgslp"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.635701 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wctjb"]
Nov 25 12:31:51 crc kubenswrapper[4688]: W1125 12:31:51.650424 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4297ef88_82df_476f_90f6_e87b26dae1fd.slice/crio-41e6d838dd00949bfab6b889efc9c62d42dd6abf510ccec3112981ca08a636cc WatchSource:0}: Error finding container 41e6d838dd00949bfab6b889efc9c62d42dd6abf510ccec3112981ca08a636cc: Status 404 returned error can't find the container with id 41e6d838dd00949bfab6b889efc9c62d42dd6abf510ccec3112981ca08a636cc
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.685516 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.759443 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-slb7z"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.796730 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-slb7z" event={"ID":"588d841f-905c-42bb-9242-2e86b7e66877","Type":"ContainerStarted","Data":"47c4ce26c1183d67481f0b0b0117b5afbaca498b91b1beb03bcca855261c6342"}
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.798893 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wctjb" event={"ID":"4297ef88-82df-476f-90f6-e87b26dae1fd","Type":"ContainerStarted","Data":"41e6d838dd00949bfab6b889efc9c62d42dd6abf510ccec3112981ca08a636cc"}
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.801449 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8kpvb" event={"ID":"8659d06d-b50f-4258-9eb8-44b1de249e34","Type":"ContainerStarted","Data":"15dde4a08959bdb5b427fc73d72271c4fcea718022d5d77eb385027992e6c913"}
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.802921 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tgslp" event={"ID":"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7","Type":"ContainerStarted","Data":"a51d1234c4126a92d1edb92d8dfba869fd68c159863e8de5bbe429f93f6df595"}
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.821266 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2611fb13-90fc-4310-a8dc-c224f4689a9f","Type":"ContainerStarted","Data":"0879ffb07d91d2cd05add6977f5f1cab6b672ac8fbe0772d79c85db85e4d5fe4"}
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.829315 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-f97r9"
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.832843 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" event={"ID":"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74","Type":"ContainerStarted","Data":"d5b33b774ed98e355b6ce05ee32e8e2f4fdd52f966e763566a5894ca0cd22f1f"}
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.888729 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-f97r9"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.897430 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-f97r9"]
Nov 25 12:31:51 crc kubenswrapper[4688]: I1125 12:31:51.979373 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rzvvs"]
Nov 25 12:31:52 crc kubenswrapper[4688]: W1125 12:31:52.000232 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b031476_5e95_46a9_8774_4073f647cb7a.slice/crio-0ad0e260ab3ea214e048cb6382b5c44878c7744e549c8ba3ef8f8c9d65af8d0d WatchSource:0}: Error finding container 0ad0e260ab3ea214e048cb6382b5c44878c7744e549c8ba3ef8f8c9d65af8d0d: Status 404 returned error can't find the container with id 0ad0e260ab3ea214e048cb6382b5c44878c7744e549c8ba3ef8f8c9d65af8d0d
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.005869 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-crb7s"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.021756 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-td7pk"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.042272 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.044123 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.057407 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.057703 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.066853 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cc2v5"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.078059 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.110079 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jqj9f"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.216722 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.216785 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-scripts\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.216828 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-config-data\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.216851 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-logs\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.216894 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5p7s\" (UniqueName: \"kubernetes.io/projected/008e1385-f07a-48d2-b940-9cdd96ab97dd-kube-api-access-b5p7s\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.216932 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.216975 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.320281 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.320791 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-scripts\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.320837 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-config-data\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.320862 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-logs\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.320913 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5p7s\" (UniqueName: \"kubernetes.io/projected/008e1385-f07a-48d2-b940-9cdd96ab97dd-kube-api-access-b5p7s\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.320961 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.321014 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.321640 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.322387 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-logs\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.323080 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.334737 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-scripts\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.337548 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-config-data\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.349479 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5p7s\" (UniqueName: \"kubernetes.io/projected/008e1385-f07a-48d2-b940-9cdd96ab97dd-kube-api-access-b5p7s\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.351819 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.355714 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.371625 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.376004 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.406051 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.422778 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.422919 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.422964 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.423002 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwds\" (UniqueName: \"kubernetes.io/projected/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-kube-api-access-snwds\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.423697 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.424217 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.425038 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.474804 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.541912 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.542026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.542049 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.542089 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.542171 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.542207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.542254 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwds\" (UniqueName: \"kubernetes.io/projected/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-kube-api-access-snwds\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.551184 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.551615 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.553291 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.553399 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.584256 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.584964 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.589302 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.592337 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.608885 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwds\" (UniqueName: \"kubernetes.io/projected/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-kube-api-access-snwds\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.661080 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.661652 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 25 12:31:52 crc kubenswrapper[4688]: E1125 12:31:52.662249 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="d81bade4-3533-46e6-bc1c-6ebc6f3580f1"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.704122 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.753803 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c71f5edc-39d9-4386-85e5-e304ee06f318" path="/var/lib/kubelet/pods/c71f5edc-39d9-4386-85e5-e304ee06f318/volumes"
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.890181 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" event={"ID":"b8b45337-3ba9-48f5-850a-bc66e7178054","Type":"ContainerStarted","Data":"4e41e7f2c53f72f3c1a65464c2b06498743e2614cdc08c0a791dcc5e93657da0"}
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.910332 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rzvvs" event={"ID":"5b031476-5e95-46a9-8774-4073f647cb7a","Type":"ContainerStarted","Data":"0ad0e260ab3ea214e048cb6382b5c44878c7744e549c8ba3ef8f8c9d65af8d0d"}
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.956989 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wctjb" event={"ID":"4297ef88-82df-476f-90f6-e87b26dae1fd","Type":"ContainerStarted","Data":"df4fd5256d892ee40d4f1c325961a17dbc3010bcafdb5844d6a5a551a53444ad"}
Nov 25 12:31:52 crc kubenswrapper[4688]: I1125 12:31:52.995187 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-wctjb" podStartSLOduration=2.995172511 podStartE2EDuration="2.995172511s" podCreationTimestamp="2025-11-25 12:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:31:52.992861619 +0000 UTC m=+1063.102490487" watchObservedRunningTime="2025-11-25 12:31:52.995172511 +0000 UTC m=+1063.104801379"
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.026916 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8kpvb" event={"ID":"8659d06d-b50f-4258-9eb8-44b1de249e34","Type":"ContainerStarted","Data":"fde7330c569918087a32cb1e9b2bfbccf210dd7c3e6a7529008e72b5f6d55ae3"}
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.044098 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-crb7s" event={"ID":"05379dbe-faf8-4ac1-a032-40f31cb4e457","Type":"ContainerStarted","Data":"dd6b7c6cf4e95fe28a966419461f86dcb392b53f82af8934817bedfc5492132f"}
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.048308 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" event={"ID":"4e944f85-67c1-480a-8656-e34aba801d33","Type":"ContainerStarted","Data":"3728aaa28ab1ad5473403f5d80b63432ef73dc3ddff7e630348d1a09d6572357"}
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.053394 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8kpvb" podStartSLOduration=4.053368464 podStartE2EDuration="4.053368464s" podCreationTimestamp="2025-11-25 12:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:31:53.045585255 +0000 UTC m=+1063.155214143" watchObservedRunningTime="2025-11-25 12:31:53.053368464 +0000 UTC m=+1063.162997332"
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.060003 4688 generic.go:334] "Generic (PLEG): container finished" podID="94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" containerID="eda8ede2ca57b6aa53f66607fb026055e5c96be9f8fdefe7992f0ff93604bb49" exitCode=0
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.060094 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.061709 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" event={"ID":"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74","Type":"ContainerDied","Data":"eda8ede2ca57b6aa53f66607fb026055e5c96be9f8fdefe7992f0ff93604bb49"}
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.199906 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.261775 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.275496 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-scripts\") pod \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.275876 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-logs\") pod \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.275943 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.275970 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-httpd-run\") pod \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.276038 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-config-data\") pod \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.276141 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snwds\" (UniqueName: \"kubernetes.io/projected/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-kube-api-access-snwds\") pod \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.276166 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-combined-ca-bundle\") pod \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\" (UID: \"d81bade4-3533-46e6-bc1c-6ebc6f3580f1\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.280438 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d81bade4-3533-46e6-bc1c-6ebc6f3580f1" (UID: "d81bade4-3533-46e6-bc1c-6ebc6f3580f1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.280736 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-logs" (OuterVolumeSpecName: "logs") pod "d81bade4-3533-46e6-bc1c-6ebc6f3580f1" (UID: "d81bade4-3533-46e6-bc1c-6ebc6f3580f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.281193 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d81bade4-3533-46e6-bc1c-6ebc6f3580f1" (UID: "d81bade4-3533-46e6-bc1c-6ebc6f3580f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.284253 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "d81bade4-3533-46e6-bc1c-6ebc6f3580f1" (UID: "d81bade4-3533-46e6-bc1c-6ebc6f3580f1"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.286199 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-kube-api-access-snwds" (OuterVolumeSpecName: "kube-api-access-snwds") pod "d81bade4-3533-46e6-bc1c-6ebc6f3580f1" (UID: "d81bade4-3533-46e6-bc1c-6ebc6f3580f1"). InnerVolumeSpecName "kube-api-access-snwds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.286199 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-config-data" (OuterVolumeSpecName: "config-data") pod "d81bade4-3533-46e6-bc1c-6ebc6f3580f1" (UID: "d81bade4-3533-46e6-bc1c-6ebc6f3580f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.288626 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-scripts" (OuterVolumeSpecName: "scripts") pod "d81bade4-3533-46e6-bc1c-6ebc6f3580f1" (UID: "d81bade4-3533-46e6-bc1c-6ebc6f3580f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: W1125 12:31:53.309681 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod008e1385_f07a_48d2_b940_9cdd96ab97dd.slice/crio-759d94ea26bcfc4cc7ce624e27852515ad43bda1d08232631546d6569222c599 WatchSource:0}: Error finding container 759d94ea26bcfc4cc7ce624e27852515ad43bda1d08232631546d6569222c599: Status 404 returned error can't find the container with id 759d94ea26bcfc4cc7ce624e27852515ad43bda1d08232631546d6569222c599
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.381769 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.381808 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snwds\" (UniqueName: \"kubernetes.io/projected/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-kube-api-access-snwds\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.381820 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.381831 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.381843 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-logs\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.381882 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.381894 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81bade4-3533-46e6-bc1c-6ebc6f3580f1-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.425881 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.484219 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.616385 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-2xg2m"
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.691035 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-swift-storage-0\") pod \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.691124 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-sb\") pod \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.691179 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-nb\") pod \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.691353 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-config\") pod \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.691423 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8vpv\" (UniqueName: \"kubernetes.io/projected/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-kube-api-access-l8vpv\") pod \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.691509 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-svc\") pod \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\" (UID: \"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74\") "
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.705638 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-kube-api-access-l8vpv" (OuterVolumeSpecName: "kube-api-access-l8vpv") pod "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" (UID: "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74"). InnerVolumeSpecName "kube-api-access-l8vpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.740494 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" (UID: "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.747982 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-config" (OuterVolumeSpecName: "config") pod "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" (UID: "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.748900 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" (UID: "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.758835 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" (UID: "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.770370 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" (UID: "94acb5b0-3caa-4c3a-b1b4-22593c7bbb74"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.795008 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-config\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.795044 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8vpv\" (UniqueName: \"kubernetes.io/projected/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-kube-api-access-l8vpv\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.795079 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.795087 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.795095 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:53 crc kubenswrapper[4688]: I1125 12:31:53.795103 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.075319 4688 generic.go:334] "Generic (PLEG): container finished" podID="4e944f85-67c1-480a-8656-e34aba801d33" containerID="749bad20d0dd7040a60eda6d792dd229e3d794b0bab92f0ce239ef3ca0204f1b" exitCode=0
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.075399 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" event={"ID":"4e944f85-67c1-480a-8656-e34aba801d33","Type":"ContainerDied","Data":"749bad20d0dd7040a60eda6d792dd229e3d794b0bab92f0ce239ef3ca0204f1b"}
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.075429 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" event={"ID":"4e944f85-67c1-480a-8656-e34aba801d33","Type":"ContainerStarted","Data":"4f864f0266c880a28f16b64e086e0f2f63cda713517b59167308d8ef6e07f1c2"}
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.076606 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f"
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.086078 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-2xg2m"
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.086067 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-2xg2m" event={"ID":"94acb5b0-3caa-4c3a-b1b4-22593c7bbb74","Type":"ContainerDied","Data":"d5b33b774ed98e355b6ce05ee32e8e2f4fdd52f966e763566a5894ca0cd22f1f"}
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.086843 4688 scope.go:117] "RemoveContainer" containerID="eda8ede2ca57b6aa53f66607fb026055e5c96be9f8fdefe7992f0ff93604bb49"
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.090726 4688 generic.go:334] "Generic (PLEG): container finished" podID="b8b45337-3ba9-48f5-850a-bc66e7178054" containerID="42f1268011fd08175c16525720f48011026b43790944acd7f2bffbeabdcc9301" exitCode=0
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.090797 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" event={"ID":"b8b45337-3ba9-48f5-850a-bc66e7178054","Type":"ContainerDied","Data":"42f1268011fd08175c16525720f48011026b43790944acd7f2bffbeabdcc9301"}
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.096386 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"008e1385-f07a-48d2-b940-9cdd96ab97dd","Type":"ContainerStarted","Data":"759d94ea26bcfc4cc7ce624e27852515ad43bda1d08232631546d6569222c599"}
Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.096901 4688 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.111344 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" podStartSLOduration=3.111320238 podStartE2EDuration="3.111320238s" podCreationTimestamp="2025-11-25 12:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:31:54.105895743 +0000 UTC m=+1064.215524611" watchObservedRunningTime="2025-11-25 12:31:54.111320238 +0000 UTC m=+1064.220949106" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.351943 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-2xg2m"] Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.382278 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-2xg2m"] Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.399140 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.450016 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.473611 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:31:54 crc kubenswrapper[4688]: E1125 12:31:54.474114 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" containerName="init" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.474133 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" containerName="init" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.474342 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" containerName="init" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.475391 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.485153 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.496685 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.520790 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.520890 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-logs\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.520959 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.521080 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.521177 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.521195 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmxsg\" (UniqueName: \"kubernetes.io/projected/89fa03a9-0da8-4141-a2af-84c5e8f2554c-kube-api-access-rmxsg\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.521213 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.592602 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.624496 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmxsg\" (UniqueName: \"kubernetes.io/projected/89fa03a9-0da8-4141-a2af-84c5e8f2554c-kube-api-access-rmxsg\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.624562 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.624585 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.624669 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.624696 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-logs\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.624771 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.624810 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.626292 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-logs\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.626891 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.627225 4688 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.632991 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.633404 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.655224 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmxsg\" (UniqueName: \"kubernetes.io/projected/89fa03a9-0da8-4141-a2af-84c5e8f2554c-kube-api-access-rmxsg\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.656047 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.686829 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.727360 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-sb\") pod \"b8b45337-3ba9-48f5-850a-bc66e7178054\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.727411 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-nb\") pod \"b8b45337-3ba9-48f5-850a-bc66e7178054\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.727639 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m48fm\" (UniqueName: \"kubernetes.io/projected/b8b45337-3ba9-48f5-850a-bc66e7178054-kube-api-access-m48fm\") pod \"b8b45337-3ba9-48f5-850a-bc66e7178054\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.727739 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-swift-storage-0\") pod \"b8b45337-3ba9-48f5-850a-bc66e7178054\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.727827 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-svc\") pod \"b8b45337-3ba9-48f5-850a-bc66e7178054\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.727881 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-config\") pod \"b8b45337-3ba9-48f5-850a-bc66e7178054\" (UID: \"b8b45337-3ba9-48f5-850a-bc66e7178054\") " Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.737835 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b45337-3ba9-48f5-850a-bc66e7178054-kube-api-access-m48fm" (OuterVolumeSpecName: "kube-api-access-m48fm") pod "b8b45337-3ba9-48f5-850a-bc66e7178054" (UID: "b8b45337-3ba9-48f5-850a-bc66e7178054"). InnerVolumeSpecName "kube-api-access-m48fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.761162 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b8b45337-3ba9-48f5-850a-bc66e7178054" (UID: "b8b45337-3ba9-48f5-850a-bc66e7178054"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.764911 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b8b45337-3ba9-48f5-850a-bc66e7178054" (UID: "b8b45337-3ba9-48f5-850a-bc66e7178054"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.766934 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94acb5b0-3caa-4c3a-b1b4-22593c7bbb74" path="/var/lib/kubelet/pods/94acb5b0-3caa-4c3a-b1b4-22593c7bbb74/volumes" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.770566 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81bade4-3533-46e6-bc1c-6ebc6f3580f1" path="/var/lib/kubelet/pods/d81bade4-3533-46e6-bc1c-6ebc6f3580f1/volumes" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.771941 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-config" (OuterVolumeSpecName: "config") pod "b8b45337-3ba9-48f5-850a-bc66e7178054" (UID: "b8b45337-3ba9-48f5-850a-bc66e7178054"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.771918 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b8b45337-3ba9-48f5-850a-bc66e7178054" (UID: "b8b45337-3ba9-48f5-850a-bc66e7178054"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.788834 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b8b45337-3ba9-48f5-850a-bc66e7178054" (UID: "b8b45337-3ba9-48f5-850a-bc66e7178054"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.831076 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.831116 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.831161 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.831178 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m48fm\" (UniqueName: \"kubernetes.io/projected/b8b45337-3ba9-48f5-850a-bc66e7178054-kube-api-access-m48fm\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.831191 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.831234 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8b45337-3ba9-48f5-850a-bc66e7178054-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:31:54 crc kubenswrapper[4688]: I1125 12:31:54.886903 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:31:55 crc kubenswrapper[4688]: I1125 12:31:55.124189 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" Nov 25 12:31:55 crc kubenswrapper[4688]: I1125 12:31:55.127019 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-td7pk" event={"ID":"b8b45337-3ba9-48f5-850a-bc66e7178054","Type":"ContainerDied","Data":"4e41e7f2c53f72f3c1a65464c2b06498743e2614cdc08c0a791dcc5e93657da0"} Nov 25 12:31:55 crc kubenswrapper[4688]: I1125 12:31:55.127070 4688 scope.go:117] "RemoveContainer" containerID="42f1268011fd08175c16525720f48011026b43790944acd7f2bffbeabdcc9301" Nov 25 12:31:55 crc kubenswrapper[4688]: I1125 12:31:55.230155 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-td7pk"] Nov 25 12:31:55 crc kubenswrapper[4688]: I1125 12:31:55.256213 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-td7pk"] Nov 25 12:31:55 crc kubenswrapper[4688]: E1125 12:31:55.315489 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8b45337_3ba9_48f5_850a_bc66e7178054.slice/crio-4e41e7f2c53f72f3c1a65464c2b06498743e2614cdc08c0a791dcc5e93657da0\": RecentStats: unable to find data in memory cache]" Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:31:55.555405 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:32:03 crc kubenswrapper[4688]: W1125 12:31:55.559390 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89fa03a9_0da8_4141_a2af_84c5e8f2554c.slice/crio-863e7876c8d245529d5fb5a31726b5bf15dad2932886aa7cc67f750048779705 WatchSource:0}: Error finding container 863e7876c8d245529d5fb5a31726b5bf15dad2932886aa7cc67f750048779705: Status 404 returned error can't find the container with id 863e7876c8d245529d5fb5a31726b5bf15dad2932886aa7cc67f750048779705 Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:31:56.145255 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"008e1385-f07a-48d2-b940-9cdd96ab97dd","Type":"ContainerStarted","Data":"b164c0670ffb246358695353d5cfbd0aab3d885ff9e4adc73678cd7f3879b274"} Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:31:56.147872 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89fa03a9-0da8-4141-a2af-84c5e8f2554c","Type":"ContainerStarted","Data":"863e7876c8d245529d5fb5a31726b5bf15dad2932886aa7cc67f750048779705"} Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:31:56.773088 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b45337-3ba9-48f5-850a-bc66e7178054" path="/var/lib/kubelet/pods/b8b45337-3ba9-48f5-850a-bc66e7178054/volumes" Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:31:57.164238 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89fa03a9-0da8-4141-a2af-84c5e8f2554c","Type":"ContainerStarted","Data":"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d"} Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:31:59.909583 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:01.687844 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" Nov 
25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:01.740417 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wdtng"] Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:01.740791 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-wdtng" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="dnsmasq-dns" containerID="cri-o://d0030fd626cdb47185577da1466f388c66501f8de7eec3ff64bf3e2e7f22f489" gracePeriod=10 Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:02.207757 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"008e1385-f07a-48d2-b940-9cdd96ab97dd","Type":"ContainerStarted","Data":"46bdd59eeac4260ec2d3c8d234964a162c87a6630f0b8a9876d11d7ab680db2e"} Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:03.221379 4688 generic.go:334] "Generic (PLEG): container finished" podID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerID="d0030fd626cdb47185577da1466f388c66501f8de7eec3ff64bf3e2e7f22f489" exitCode=0 Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:03.221438 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wdtng" event={"ID":"168ae79c-b5b7-41f7-9443-96af2e8ad91c","Type":"ContainerDied","Data":"d0030fd626cdb47185577da1466f388c66501f8de7eec3ff64bf3e2e7f22f489"} Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:03.221845 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerName="glance-log" containerID="cri-o://b164c0670ffb246358695353d5cfbd0aab3d885ff9e4adc73678cd7f3879b274" gracePeriod=30 Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:03.221938 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerName="glance-httpd" containerID="cri-o://46bdd59eeac4260ec2d3c8d234964a162c87a6630f0b8a9876d11d7ab680db2e" gracePeriod=30 Nov 25 12:32:03 crc kubenswrapper[4688]: I1125 12:32:03.254105 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.254073459 podStartE2EDuration="13.254073459s" podCreationTimestamp="2025-11-25 12:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:03.252788266 +0000 UTC m=+1073.362417134" watchObservedRunningTime="2025-11-25 12:32:03.254073459 +0000 UTC m=+1073.363702337" Nov 25 12:32:04 crc kubenswrapper[4688]: I1125 12:32:04.233273 4688 generic.go:334] "Generic (PLEG): container finished" podID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerID="46bdd59eeac4260ec2d3c8d234964a162c87a6630f0b8a9876d11d7ab680db2e" exitCode=0 Nov 25 12:32:04 crc kubenswrapper[4688]: I1125 12:32:04.233630 4688 generic.go:334] "Generic (PLEG): container finished" podID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerID="b164c0670ffb246358695353d5cfbd0aab3d885ff9e4adc73678cd7f3879b274" exitCode=143 Nov 25 12:32:04 crc kubenswrapper[4688]: I1125 12:32:04.233379 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"008e1385-f07a-48d2-b940-9cdd96ab97dd","Type":"ContainerDied","Data":"46bdd59eeac4260ec2d3c8d234964a162c87a6630f0b8a9876d11d7ab680db2e"} Nov 25 12:32:04 crc 
kubenswrapper[4688]: I1125 12:32:04.233725 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"008e1385-f07a-48d2-b940-9cdd96ab97dd","Type":"ContainerDied","Data":"b164c0670ffb246358695353d5cfbd0aab3d885ff9e4adc73678cd7f3879b274"} Nov 25 12:32:04 crc kubenswrapper[4688]: I1125 12:32:04.235705 4688 generic.go:334] "Generic (PLEG): container finished" podID="8659d06d-b50f-4258-9eb8-44b1de249e34" containerID="fde7330c569918087a32cb1e9b2bfbccf210dd7c3e6a7529008e72b5f6d55ae3" exitCode=0 Nov 25 12:32:04 crc kubenswrapper[4688]: I1125 12:32:04.235748 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8kpvb" event={"ID":"8659d06d-b50f-4258-9eb8-44b1de249e34","Type":"ContainerDied","Data":"fde7330c569918087a32cb1e9b2bfbccf210dd7c3e6a7529008e72b5f6d55ae3"} Nov 25 12:32:05 crc kubenswrapper[4688]: I1125 12:32:05.020729 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wdtng" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Nov 25 12:32:10 crc kubenswrapper[4688]: I1125 12:32:10.020355 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wdtng" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Nov 25 12:32:15 crc kubenswrapper[4688]: I1125 12:32:15.020516 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wdtng" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Nov 25 12:32:15 crc kubenswrapper[4688]: I1125 12:32:15.021135 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:32:15 crc kubenswrapper[4688]: E1125 12:32:15.084374 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 25 12:32:15 crc kubenswrapper[4688]: E1125 12:32:15.084544 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxhq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-crb7s_openstack(05379dbe-faf8-4ac1-a032-40f31cb4e457): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:32:15 crc kubenswrapper[4688]: E1125 12:32:15.088249 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-crb7s" podUID="05379dbe-faf8-4ac1-a032-40f31cb4e457" Nov 25 12:32:15 crc kubenswrapper[4688]: E1125 12:32:15.329596 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-crb7s" podUID="05379dbe-faf8-4ac1-a032-40f31cb4e457" Nov 25 12:32:16 crc kubenswrapper[4688]: E1125 12:32:16.985283 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 25 12:32:16 crc kubenswrapper[4688]: E1125 12:32:16.986216 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbcdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-rzvvs_openstack(5b031476-5e95-46a9-8774-4073f647cb7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:32:16 crc kubenswrapper[4688]: E1125 12:32:16.987414 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-rzvvs" podUID="5b031476-5e95-46a9-8774-4073f647cb7a" Nov 25 12:32:17 crc kubenswrapper[4688]: E1125 12:32:17.348082 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-rzvvs" podUID="5b031476-5e95-46a9-8774-4073f647cb7a" Nov 25 12:32:20 crc kubenswrapper[4688]: E1125 12:32:20.975915 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 25 12:32:20 crc kubenswrapper[4688]: E1125 12:32:20.976409 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n554h5cdh67dh575h68bh85h554h5dh586h6dhc8h548h87h655h686h5bh8dh5fdhfbh664h64h5bh65ch56bh597h64bh587h8dh5b6h8ch76h695q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dh6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2611fb13-90fc-4310-a8dc-c224f4689a9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.056159 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.166966 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df2hr\" (UniqueName: \"kubernetes.io/projected/8659d06d-b50f-4258-9eb8-44b1de249e34-kube-api-access-df2hr\") pod \"8659d06d-b50f-4258-9eb8-44b1de249e34\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.167022 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-credential-keys\") pod \"8659d06d-b50f-4258-9eb8-44b1de249e34\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.167141 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-fernet-keys\") pod \"8659d06d-b50f-4258-9eb8-44b1de249e34\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.167206 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-scripts\") pod \"8659d06d-b50f-4258-9eb8-44b1de249e34\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.167244 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-combined-ca-bundle\") pod \"8659d06d-b50f-4258-9eb8-44b1de249e34\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.167967 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-config-data\") pod \"8659d06d-b50f-4258-9eb8-44b1de249e34\" (UID: \"8659d06d-b50f-4258-9eb8-44b1de249e34\") " Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.173128 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8659d06d-b50f-4258-9eb8-44b1de249e34-kube-api-access-df2hr" (OuterVolumeSpecName: "kube-api-access-df2hr") pod "8659d06d-b50f-4258-9eb8-44b1de249e34" (UID: "8659d06d-b50f-4258-9eb8-44b1de249e34"). InnerVolumeSpecName "kube-api-access-df2hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.173201 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8659d06d-b50f-4258-9eb8-44b1de249e34" (UID: "8659d06d-b50f-4258-9eb8-44b1de249e34"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.187481 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8659d06d-b50f-4258-9eb8-44b1de249e34" (UID: "8659d06d-b50f-4258-9eb8-44b1de249e34"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.187576 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-scripts" (OuterVolumeSpecName: "scripts") pod "8659d06d-b50f-4258-9eb8-44b1de249e34" (UID: "8659d06d-b50f-4258-9eb8-44b1de249e34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.194617 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8659d06d-b50f-4258-9eb8-44b1de249e34" (UID: "8659d06d-b50f-4258-9eb8-44b1de249e34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.197250 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-config-data" (OuterVolumeSpecName: "config-data") pod "8659d06d-b50f-4258-9eb8-44b1de249e34" (UID: "8659d06d-b50f-4258-9eb8-44b1de249e34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.270274 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.270313 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df2hr\" (UniqueName: \"kubernetes.io/projected/8659d06d-b50f-4258-9eb8-44b1de249e34-kube-api-access-df2hr\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.270328 4688 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.270341 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.270352 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.270363 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8659d06d-b50f-4258-9eb8-44b1de249e34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.539970 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8kpvb" event={"ID":"8659d06d-b50f-4258-9eb8-44b1de249e34","Type":"ContainerDied","Data":"15dde4a08959bdb5b427fc73d72271c4fcea718022d5d77eb385027992e6c913"} Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.540003 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15dde4a08959bdb5b427fc73d72271c4fcea718022d5d77eb385027992e6c913" Nov 25 12:32:21 crc kubenswrapper[4688]: I1125 12:32:21.540048 4688 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8kpvb" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.133948 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8kpvb"] Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.141069 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8kpvb"] Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.231000 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8nw4p"] Nov 25 12:32:22 crc kubenswrapper[4688]: E1125 12:32:22.233672 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8659d06d-b50f-4258-9eb8-44b1de249e34" containerName="keystone-bootstrap" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.233842 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8659d06d-b50f-4258-9eb8-44b1de249e34" containerName="keystone-bootstrap" Nov 25 12:32:22 crc kubenswrapper[4688]: E1125 12:32:22.233911 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b45337-3ba9-48f5-850a-bc66e7178054" containerName="init" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.233962 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b45337-3ba9-48f5-850a-bc66e7178054" containerName="init" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.234263 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8659d06d-b50f-4258-9eb8-44b1de249e34" containerName="keystone-bootstrap" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.234324 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b45337-3ba9-48f5-850a-bc66e7178054" containerName="init" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.234994 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.238564 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.238738 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.238814 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.238863 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f7h8n" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.239048 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.239664 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8nw4p"] Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.394082 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-credential-keys\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.394129 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-scripts\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.394345 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-combined-ca-bundle\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.394401 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkhg\" (UniqueName: \"kubernetes.io/projected/0055b409-5571-400e-a4c1-46a58c368692-kube-api-access-wxkhg\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.394458 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-config-data\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.394489 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-fernet-keys\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.496633 4688 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-credential-keys\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.497014 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-scripts\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.497053 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-combined-ca-bundle\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.497163 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkhg\" (UniqueName: \"kubernetes.io/projected/0055b409-5571-400e-a4c1-46a58c368692-kube-api-access-wxkhg\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.497213 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-config-data\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.497264 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-fernet-keys\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.502040 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-credential-keys\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.502722 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-scripts\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.502761 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-fernet-keys\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.503321 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-config-data\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " 
pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.506024 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-combined-ca-bundle\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.517777 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkhg\" (UniqueName: \"kubernetes.io/projected/0055b409-5571-400e-a4c1-46a58c368692-kube-api-access-wxkhg\") pod \"keystone-bootstrap-8nw4p\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.552734 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.585758 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.585803 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 12:32:22 crc kubenswrapper[4688]: I1125 12:32:22.748804 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8659d06d-b50f-4258-9eb8-44b1de249e34" path="/var/lib/kubelet/pods/8659d06d-b50f-4258-9eb8-44b1de249e34/volumes" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.031045 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.031601 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-slb7z_openstack(588d841f-905c-42bb-9242-2e86b7e66877): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.032884 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-slb7z" podUID="588d841f-905c-42bb-9242-2e86b7e66877" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.102725 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.113086 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128209 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwnx5\" (UniqueName: \"kubernetes.io/projected/168ae79c-b5b7-41f7-9443-96af2e8ad91c-kube-api-access-jwnx5\") pod \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128289 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-logs\") pod \"008e1385-f07a-48d2-b940-9cdd96ab97dd\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128426 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-scripts\") pod \"008e1385-f07a-48d2-b940-9cdd96ab97dd\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128485 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-sb\") pod \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128546 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5p7s\" (UniqueName: \"kubernetes.io/projected/008e1385-f07a-48d2-b940-9cdd96ab97dd-kube-api-access-b5p7s\") pod \"008e1385-f07a-48d2-b940-9cdd96ab97dd\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128601 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-dns-svc\") pod \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128650 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-config-data\") pod \"008e1385-f07a-48d2-b940-9cdd96ab97dd\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128698 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-nb\") pod \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128728 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-config\") pod \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\" (UID: \"168ae79c-b5b7-41f7-9443-96af2e8ad91c\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128807 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-combined-ca-bundle\") pod \"008e1385-f07a-48d2-b940-9cdd96ab97dd\" (UID: 
\"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128895 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-httpd-run\") pod \"008e1385-f07a-48d2-b940-9cdd96ab97dd\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.128952 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"008e1385-f07a-48d2-b940-9cdd96ab97dd\" (UID: \"008e1385-f07a-48d2-b940-9cdd96ab97dd\") " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.132517 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-logs" (OuterVolumeSpecName: "logs") pod "008e1385-f07a-48d2-b940-9cdd96ab97dd" (UID: "008e1385-f07a-48d2-b940-9cdd96ab97dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.133270 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "008e1385-f07a-48d2-b940-9cdd96ab97dd" (UID: "008e1385-f07a-48d2-b940-9cdd96ab97dd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.136209 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168ae79c-b5b7-41f7-9443-96af2e8ad91c-kube-api-access-jwnx5" (OuterVolumeSpecName: "kube-api-access-jwnx5") pod "168ae79c-b5b7-41f7-9443-96af2e8ad91c" (UID: "168ae79c-b5b7-41f7-9443-96af2e8ad91c"). InnerVolumeSpecName "kube-api-access-jwnx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.142795 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-scripts" (OuterVolumeSpecName: "scripts") pod "008e1385-f07a-48d2-b940-9cdd96ab97dd" (UID: "008e1385-f07a-48d2-b940-9cdd96ab97dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.150691 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "008e1385-f07a-48d2-b940-9cdd96ab97dd" (UID: "008e1385-f07a-48d2-b940-9cdd96ab97dd"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.167180 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008e1385-f07a-48d2-b940-9cdd96ab97dd-kube-api-access-b5p7s" (OuterVolumeSpecName: "kube-api-access-b5p7s") pod "008e1385-f07a-48d2-b940-9cdd96ab97dd" (UID: "008e1385-f07a-48d2-b940-9cdd96ab97dd"). InnerVolumeSpecName "kube-api-access-b5p7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.193140 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "168ae79c-b5b7-41f7-9443-96af2e8ad91c" (UID: "168ae79c-b5b7-41f7-9443-96af2e8ad91c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.198378 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "008e1385-f07a-48d2-b940-9cdd96ab97dd" (UID: "008e1385-f07a-48d2-b940-9cdd96ab97dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.202467 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-config" (OuterVolumeSpecName: "config") pod "168ae79c-b5b7-41f7-9443-96af2e8ad91c" (UID: "168ae79c-b5b7-41f7-9443-96af2e8ad91c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.208088 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "168ae79c-b5b7-41f7-9443-96af2e8ad91c" (UID: "168ae79c-b5b7-41f7-9443-96af2e8ad91c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.224579 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-config-data" (OuterVolumeSpecName: "config-data") pod "008e1385-f07a-48d2-b940-9cdd96ab97dd" (UID: "008e1385-f07a-48d2-b940-9cdd96ab97dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.231883 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.231915 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5p7s\" (UniqueName: \"kubernetes.io/projected/008e1385-f07a-48d2-b940-9cdd96ab97dd-kube-api-access-b5p7s\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.231926 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.231936 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.231944 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.231953 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.231962 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008e1385-f07a-48d2-b940-9cdd96ab97dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.231970 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.232019 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.232031 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwnx5\" (UniqueName: \"kubernetes.io/projected/168ae79c-b5b7-41f7-9443-96af2e8ad91c-kube-api-access-jwnx5\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.232039 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/008e1385-f07a-48d2-b940-9cdd96ab97dd-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.236782 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "168ae79c-b5b7-41f7-9443-96af2e8ad91c" (UID: "168ae79c-b5b7-41f7-9443-96af2e8ad91c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.253597 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.333934 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.333968 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168ae79c-b5b7-41f7-9443-96af2e8ad91c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.420720 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.421197 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nwq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tgslp_openstack(ff6fe51c-f968-4dd0-93c2-b355ac6c27c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.422417 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-tgslp" podUID="ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.578920 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"008e1385-f07a-48d2-b940-9cdd96ab97dd","Type":"ContainerDied","Data":"759d94ea26bcfc4cc7ce624e27852515ad43bda1d08232631546d6569222c599"} Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.579286 4688 scope.go:117] "RemoveContainer" containerID="46bdd59eeac4260ec2d3c8d234964a162c87a6630f0b8a9876d11d7ab680db2e" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.578947 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.587466 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wdtng" event={"ID":"168ae79c-b5b7-41f7-9443-96af2e8ad91c","Type":"ContainerDied","Data":"b51023158b26afb07f66de62cb5301e74d5e7db65d5cb48ffcb71d37e9fd6e61"} Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.587513 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wdtng" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.588921 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-tgslp" podUID="ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.589133 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-slb7z" podUID="588d841f-905c-42bb-9242-2e86b7e66877" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.636249 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wdtng"] Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.641583 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wdtng"] Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.650248 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.656708 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.736309 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.736712 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="init" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.736735 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="init" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.736762 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="dnsmasq-dns" Nov 25 
12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.736771 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="dnsmasq-dns" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.736800 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerName="glance-log" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.736809 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerName="glance-log" Nov 25 12:32:24 crc kubenswrapper[4688]: E1125 12:32:24.736827 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerName="glance-httpd" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.736834 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerName="glance-httpd" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.737045 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="dnsmasq-dns" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.737070 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerName="glance-httpd" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.737098 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" containerName="glance-log" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.738213 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.740861 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.741030 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.753959 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008e1385-f07a-48d2-b940-9cdd96ab97dd" path="/var/lib/kubelet/pods/008e1385-f07a-48d2-b940-9cdd96ab97dd/volumes" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.754612 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" path="/var/lib/kubelet/pods/168ae79c-b5b7-41f7-9443-96af2e8ad91c/volumes" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.755193 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.863185 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8nw4p"] Nov 25 12:32:24 crc kubenswrapper[4688]: W1125 12:32:24.883803 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0055b409_5571_400e_a4c1_46a58c368692.slice/crio-f600a4ce9e9e28a9bc0fbcbf4f2f500be7eca62ea1e2202c5767d3348b1a16f3 WatchSource:0}: Error finding container f600a4ce9e9e28a9bc0fbcbf4f2f500be7eca62ea1e2202c5767d3348b1a16f3: Status 404 returned error can't find the container with id f600a4ce9e9e28a9bc0fbcbf4f2f500be7eca62ea1e2202c5767d3348b1a16f3 Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.935827 4688 scope.go:117] 
"RemoveContainer" containerID="b164c0670ffb246358695353d5cfbd0aab3d885ff9e4adc73678cd7f3879b274" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.944411 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.944574 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7299\" (UniqueName: \"kubernetes.io/projected/b276d21b-cfa5-4b99-98f4-c75e85233b0c-kube-api-access-s7299\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.944599 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.944643 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.944692 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.945397 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-logs\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.945494 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.945621 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:24 crc kubenswrapper[4688]: I1125 12:32:24.969086 4688 scope.go:117] "RemoveContainer" containerID="d0030fd626cdb47185577da1466f388c66501f8de7eec3ff64bf3e2e7f22f489" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 
12:32:25.020093 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wdtng" podUID="168ae79c-b5b7-41f7-9443-96af2e8ad91c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.047346 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7299\" (UniqueName: \"kubernetes.io/projected/b276d21b-cfa5-4b99-98f4-c75e85233b0c-kube-api-access-s7299\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.047770 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.047808 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.047836 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.048092 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-logs\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.048126 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.048156 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.048196 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.049254 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.050634 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.050667 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-logs\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.058069 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.058382 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.059433 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.061243 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.061290 4688 scope.go:117] "RemoveContainer" containerID="6134679f17358d989e6121715effedf3d8d07a30aca8318f74f4d6ef1a5dec14" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.073284 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7299\" (UniqueName: \"kubernetes.io/projected/b276d21b-cfa5-4b99-98f4-c75e85233b0c-kube-api-access-s7299\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.097430 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.359987 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.598740 4688 generic.go:334] "Generic (PLEG): container finished" podID="4297ef88-82df-476f-90f6-e87b26dae1fd" containerID="df4fd5256d892ee40d4f1c325961a17dbc3010bcafdb5844d6a5a551a53444ad" exitCode=0 Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.599038 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wctjb" event={"ID":"4297ef88-82df-476f-90f6-e87b26dae1fd","Type":"ContainerDied","Data":"df4fd5256d892ee40d4f1c325961a17dbc3010bcafdb5844d6a5a551a53444ad"} Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.600840 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8nw4p" event={"ID":"0055b409-5571-400e-a4c1-46a58c368692","Type":"ContainerStarted","Data":"dbf2bb8018c64875e0527dce58759f061f63b22162fe113d611571fd5be820bc"} Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.600895 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8nw4p" event={"ID":"0055b409-5571-400e-a4c1-46a58c368692","Type":"ContainerStarted","Data":"f600a4ce9e9e28a9bc0fbcbf4f2f500be7eca62ea1e2202c5767d3348b1a16f3"} Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.605561 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2611fb13-90fc-4310-a8dc-c224f4689a9f","Type":"ContainerStarted","Data":"4de5225a1b6ae02445eebb45e3f94dac16393d84450e58aa87720368e4a74838"} Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.608273 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89fa03a9-0da8-4141-a2af-84c5e8f2554c","Type":"ContainerStarted","Data":"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018"} Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.608386 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerName="glance-log" containerID="cri-o://12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d" gracePeriod=30 Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.608413 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerName="glance-httpd" containerID="cri-o://1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018" gracePeriod=30 Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.642561 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8nw4p" podStartSLOduration=3.642537722 podStartE2EDuration="3.642537722s" podCreationTimestamp="2025-11-25 12:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:25.633510839 +0000 UTC m=+1095.743139707" watchObservedRunningTime="2025-11-25 12:32:25.642537722 +0000 UTC m=+1095.752166590" Nov 25 12:32:25 crc kubenswrapper[4688]: I1125 12:32:25.658610 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.658588783 podStartE2EDuration="31.658588783s" podCreationTimestamp="2025-11-25 12:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-25 12:32:25.654233186 +0000 UTC m=+1095.763862054" watchObservedRunningTime="2025-11-25 12:32:25.658588783 +0000 UTC m=+1095.768217651" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.144017 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.282255 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-scripts\") pod \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.282722 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-logs\") pod \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.282823 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-httpd-run\") pod \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.282883 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data\") pod \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.282927 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-combined-ca-bundle\") pod \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.282981 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmxsg\" (UniqueName: \"kubernetes.io/projected/89fa03a9-0da8-4141-a2af-84c5e8f2554c-kube-api-access-rmxsg\") pod \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.283011 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.283264 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89fa03a9-0da8-4141-a2af-84c5e8f2554c" (UID: "89fa03a9-0da8-4141-a2af-84c5e8f2554c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.283305 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-logs" (OuterVolumeSpecName: "logs") pod "89fa03a9-0da8-4141-a2af-84c5e8f2554c" (UID: "89fa03a9-0da8-4141-a2af-84c5e8f2554c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.283766 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.283807 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89fa03a9-0da8-4141-a2af-84c5e8f2554c-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.288295 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-scripts" (OuterVolumeSpecName: "scripts") pod "89fa03a9-0da8-4141-a2af-84c5e8f2554c" (UID: "89fa03a9-0da8-4141-a2af-84c5e8f2554c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.288678 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fa03a9-0da8-4141-a2af-84c5e8f2554c-kube-api-access-rmxsg" (OuterVolumeSpecName: "kube-api-access-rmxsg") pod "89fa03a9-0da8-4141-a2af-84c5e8f2554c" (UID: "89fa03a9-0da8-4141-a2af-84c5e8f2554c"). InnerVolumeSpecName "kube-api-access-rmxsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.296297 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "89fa03a9-0da8-4141-a2af-84c5e8f2554c" (UID: "89fa03a9-0da8-4141-a2af-84c5e8f2554c"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 12:32:26 crc kubenswrapper[4688]: E1125 12:32:26.323911 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data podName:89fa03a9-0da8-4141-a2af-84c5e8f2554c nodeName:}" failed. No retries permitted until 2025-11-25 12:32:26.823884561 +0000 UTC m=+1096.933513429 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data") pod "89fa03a9-0da8-4141-a2af-84c5e8f2554c" (UID: "89fa03a9-0da8-4141-a2af-84c5e8f2554c") : error deleting /var/lib/kubelet/pods/89fa03a9-0da8-4141-a2af-84c5e8f2554c/volume-subpaths: remove /var/lib/kubelet/pods/89fa03a9-0da8-4141-a2af-84c5e8f2554c/volume-subpaths: no such file or directory Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.326639 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89fa03a9-0da8-4141-a2af-84c5e8f2554c" (UID: "89fa03a9-0da8-4141-a2af-84c5e8f2554c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.385892 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.385919 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmxsg\" (UniqueName: \"kubernetes.io/projected/89fa03a9-0da8-4141-a2af-84c5e8f2554c-kube-api-access-rmxsg\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.385947 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.385956 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.408200 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.487609 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.628149 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-crb7s" event={"ID":"05379dbe-faf8-4ac1-a032-40f31cb4e457","Type":"ContainerStarted","Data":"f5fce7caed6d2ee0655eab95b4631fce3b28f32bba732ab8c03a01cebf6c7d79"} Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.631338 4688 generic.go:334] "Generic (PLEG): container finished" podID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerID="1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018" exitCode=143 Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.631361 4688 generic.go:334] "Generic (PLEG): container finished" podID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerID="12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d" exitCode=143 Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.631376 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.631409 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89fa03a9-0da8-4141-a2af-84c5e8f2554c","Type":"ContainerDied","Data":"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018"} Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.631429 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89fa03a9-0da8-4141-a2af-84c5e8f2554c","Type":"ContainerDied","Data":"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d"} Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.631439 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89fa03a9-0da8-4141-a2af-84c5e8f2554c","Type":"ContainerDied","Data":"863e7876c8d245529d5fb5a31726b5bf15dad2932886aa7cc67f750048779705"} Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.631453 4688 scope.go:117] "RemoveContainer" containerID="1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.650848 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-crb7s" podStartSLOduration=2.462896405 podStartE2EDuration="36.650830002s" podCreationTimestamp="2025-11-25 12:31:50 +0000 UTC" firstStartedPulling="2025-11-25 12:31:52.02743808 +0000 UTC m=+1062.137066948" lastFinishedPulling="2025-11-25 12:32:26.215371687 +0000 UTC m=+1096.325000545" observedRunningTime="2025-11-25 12:32:26.648797197 +0000 UTC m=+1096.758426085" watchObservedRunningTime="2025-11-25 12:32:26.650830002 +0000 UTC m=+1096.760458870" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.670899 4688 scope.go:117] "RemoveContainer" containerID="12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.713720 4688 scope.go:117] "RemoveContainer" containerID="1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018" Nov 25 12:32:26 crc kubenswrapper[4688]: E1125 12:32:26.714509 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018\": container with ID starting with 1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018 not found: ID does not exist" containerID="1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.714814 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018"} err="failed to get container status \"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018\": rpc error: code = NotFound desc = could not find container \"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018\": container with ID starting with 1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018 not found: ID does not exist" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.714870 4688 scope.go:117] "RemoveContainer" containerID="12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d" Nov 25 12:32:26 crc kubenswrapper[4688]: E1125 12:32:26.715352 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d\": container with ID starting with 12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d not found: ID does not exist" containerID="12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.715764 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d"} err="failed to get container status \"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d\": rpc error: code = NotFound desc = could not find container \"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d\": container with ID starting with 12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d not found: ID does not exist" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.715797 4688 scope.go:117] "RemoveContainer" containerID="1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.716688 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018"} err="failed to get container status \"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018\": rpc error: code = NotFound desc = could not find container \"1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018\": container with ID starting with 1c483987b7d477699a19271db5be800d225e77dd5ded815c16bfd6e2be64c018 not found: ID does not exist" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.716713 4688 scope.go:117] "RemoveContainer" containerID="12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.717109 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d"} err="failed to get container status \"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d\": rpc error: code = NotFound desc = could not find container \"12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d\": container with ID starting with 12b8b1d25fcc545f9010ccd89f467404201dca1ff9aba06a8daafb8599e1a49d not found: ID does not exist" Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.894077 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data\") pod \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\" (UID: \"89fa03a9-0da8-4141-a2af-84c5e8f2554c\") " Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.897295 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.898659 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data" (OuterVolumeSpecName: "config-data") pod "89fa03a9-0da8-4141-a2af-84c5e8f2554c" (UID: "89fa03a9-0da8-4141-a2af-84c5e8f2554c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
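The same two container IDs cycle through RemoveContainer and "DeleteContainer returned error" NotFound responses several times above. A minimal sketch, assuming only the klog-style header layout visible in this dump (severity letter, MMDD date, time, PID, file:line), for bucketing records so repeats like that stand out; illustrative, not part of kubelet:

import re
import sys
from collections import Counter

# Split the klog header off records like
# "E1125 12:32:26.714509 4688 log.go:32] ..." and count records per
# (severity, source file:line) pair, e.g. the burst of DeleteContainer
# errors attributed to pod_container_deletor.go:53 in this dump.
KLOG = re.compile(
    r'\b(?P<sev>[IWEF])(?P<mmdd>\d{4}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +'
    r'(?P<pid>\d+) (?P<src>[\w./-]+:\d+)\]')

def by_source(journal_text: str) -> Counter:
    return Counter((m.group("sev"), m.group("src"))
                   for m in KLOG.finditer(journal_text))

if __name__ == "__main__":  # Usage: python3 klog_sources.py < kubelet.log
    for (sev, src), n in by_source(sys.stdin.read()).most_common(20):
        print(f"{sev} {src}: {n}")

Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.906147 4688 util.go:48] "No ready sandbox for pod can be found. 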
Need to start a new one" pod="openstack/neutron-db-sync-wctjb" Nov 25 12:32:26 crc kubenswrapper[4688]: W1125 12:32:26.928171 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb276d21b_cfa5_4b99_98f4_c75e85233b0c.slice/crio-6f6f01a6e7e4d37eac70e81564bf68a909d1ab334e7cf539b8b40f38d1d194e8 WatchSource:0}: Error finding container 6f6f01a6e7e4d37eac70e81564bf68a909d1ab334e7cf539b8b40f38d1d194e8: Status 404 returned error can't find the container with id 6f6f01a6e7e4d37eac70e81564bf68a909d1ab334e7cf539b8b40f38d1d194e8 Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.973207 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.983196 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:32:26 crc kubenswrapper[4688]: I1125 12:32:26.997406 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fa03a9-0da8-4141-a2af-84c5e8f2554c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.007675 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:32:27 crc kubenswrapper[4688]: E1125 12:32:27.008086 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerName="glance-log" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.008102 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerName="glance-log" Nov 25 12:32:27 crc kubenswrapper[4688]: E1125 12:32:27.008134 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4297ef88-82df-476f-90f6-e87b26dae1fd" containerName="neutron-db-sync" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.008142 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4297ef88-82df-476f-90f6-e87b26dae1fd" containerName="neutron-db-sync" Nov 25 12:32:27 crc kubenswrapper[4688]: E1125 12:32:27.008159 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerName="glance-httpd" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.008167 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerName="glance-httpd" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.008362 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4297ef88-82df-476f-90f6-e87b26dae1fd" containerName="neutron-db-sync" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.008383 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerName="glance-httpd" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.008405 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" containerName="glance-log" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.011656 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.013782 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.015730 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.017615 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.099059 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-748zq\" (UniqueName: \"kubernetes.io/projected/4297ef88-82df-476f-90f6-e87b26dae1fd-kube-api-access-748zq\") pod \"4297ef88-82df-476f-90f6-e87b26dae1fd\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.099374 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-config\") pod \"4297ef88-82df-476f-90f6-e87b26dae1fd\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.099497 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-combined-ca-bundle\") pod \"4297ef88-82df-476f-90f6-e87b26dae1fd\" (UID: \"4297ef88-82df-476f-90f6-e87b26dae1fd\") " Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.111181 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4297ef88-82df-476f-90f6-e87b26dae1fd-kube-api-access-748zq" (OuterVolumeSpecName: "kube-api-access-748zq") pod "4297ef88-82df-476f-90f6-e87b26dae1fd" (UID: "4297ef88-82df-476f-90f6-e87b26dae1fd"). InnerVolumeSpecName "kube-api-access-748zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.126883 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-config" (OuterVolumeSpecName: "config") pod "4297ef88-82df-476f-90f6-e87b26dae1fd" (UID: "4297ef88-82df-476f-90f6-e87b26dae1fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.139323 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4297ef88-82df-476f-90f6-e87b26dae1fd" (UID: "4297ef88-82df-476f-90f6-e87b26dae1fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.200947 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201011 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201047 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzqrf\" (UniqueName: \"kubernetes.io/projected/e00027ea-a0b8-4406-bb9f-5583cbec970f-kube-api-access-gzqrf\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201186 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201246 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201332 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201386 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201440 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201904 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4297ef88-82df-476f-90f6-e87b26dae1fd-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:27 crc 
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.201981 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-748zq\" (UniqueName: \"kubernetes.io/projected/4297ef88-82df-476f-90f6-e87b26dae1fd-kube-api-access-748zq\") on node \"crc\" DevicePath \"\""
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.303074 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.303173 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.303218 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.303270 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzqrf\" (UniqueName: \"kubernetes.io/projected/e00027ea-a0b8-4406-bb9f-5583cbec970f-kube-api-access-gzqrf\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.303336 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.303365 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.303413 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.303451 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0"
\"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.304106 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.313407 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.316391 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-logs\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.318174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.321472 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.331382 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.332082 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.335615 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.338185 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzqrf\" (UniqueName: \"kubernetes.io/projected/e00027ea-a0b8-4406-bb9f-5583cbec970f-kube-api-access-gzqrf\") pod \"glance-default-internal-api-0\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.649793 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wctjb" event={"ID":"4297ef88-82df-476f-90f6-e87b26dae1fd","Type":"ContainerDied","Data":"41e6d838dd00949bfab6b889efc9c62d42dd6abf510ccec3112981ca08a636cc"}
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.649842 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41e6d838dd00949bfab6b889efc9c62d42dd6abf510ccec3112981ca08a636cc"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.649810 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wctjb"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.652511 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b276d21b-cfa5-4b99-98f4-c75e85233b0c","Type":"ContainerStarted","Data":"9b964cf6f5fc61d17820fef569897efba3de46a97d23f2ca4905b64dbf4884b7"}
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.652586 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b276d21b-cfa5-4b99-98f4-c75e85233b0c","Type":"ContainerStarted","Data":"6f6f01a6e7e4d37eac70e81564bf68a909d1ab334e7cf539b8b40f38d1d194e8"}
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.851642 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lqdmt"]
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.853479 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt"
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.876061 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lqdmt"]
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.965936 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64d655c956-kd82z"]
Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.967794 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64d655c956-kd82z"
Need to start a new one" pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.977056 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.977271 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.977435 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.977771 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-86cvc" Nov 25 12:32:27 crc kubenswrapper[4688]: I1125 12:32:27.987449 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64d655c956-kd82z"] Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.017553 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.017776 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.017992 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.018187 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.018322 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-config\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.018415 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wkr2\" (UniqueName: \"kubernetes.io/projected/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-kube-api-access-2wkr2\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.119965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120035 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-ovndb-tls-certs\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120071 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnlfj\" (UniqueName: \"kubernetes.io/projected/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-kube-api-access-jnlfj\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120119 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-config\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120148 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wkr2\" (UniqueName: \"kubernetes.io/projected/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-kube-api-access-2wkr2\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120180 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120204 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-combined-ca-bundle\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120234 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120264 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120289 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-config\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120338 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-httpd-config\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.120915 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.121296 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-config\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.122087 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.122744 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.123403 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-svc\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.143479 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wkr2\" (UniqueName: \"kubernetes.io/projected/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-kube-api-access-2wkr2\") pod \"dnsmasq-dns-55f844cf75-lqdmt\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.190912 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.222108 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-httpd-config\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.222224 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-ovndb-tls-certs\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.222263 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnlfj\" (UniqueName: \"kubernetes.io/projected/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-kube-api-access-jnlfj\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.222338 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-combined-ca-bundle\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.222381 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-config\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.226304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-config\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.227006 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-ovndb-tls-certs\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.228853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-httpd-config\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.233984 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-combined-ca-bundle\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.265192 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jnlfj\" (UniqueName: \"kubernetes.io/projected/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-kube-api-access-jnlfj\") pod \"neutron-64d655c956-kd82z\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.356340 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:28 crc kubenswrapper[4688]: I1125 12:32:28.755199 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fa03a9-0da8-4141-a2af-84c5e8f2554c" path="/var/lib/kubelet/pods/89fa03a9-0da8-4141-a2af-84c5e8f2554c/volumes" Nov 25 12:32:29 crc kubenswrapper[4688]: I1125 12:32:29.670483 4688 generic.go:334] "Generic (PLEG): container finished" podID="0055b409-5571-400e-a4c1-46a58c368692" containerID="dbf2bb8018c64875e0527dce58759f061f63b22162fe113d611571fd5be820bc" exitCode=0 Nov 25 12:32:29 crc kubenswrapper[4688]: I1125 12:32:29.670755 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8nw4p" event={"ID":"0055b409-5571-400e-a4c1-46a58c368692","Type":"ContainerDied","Data":"dbf2bb8018c64875e0527dce58759f061f63b22162fe113d611571fd5be820bc"} Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.192864 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f854495df-t6szb"] Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.194193 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.196030 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.196258 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.209162 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f854495df-t6szb"] Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.366933 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-ovndb-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.367279 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-public-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.367326 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-httpd-config\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.367362 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc88l\" (UniqueName: 
\"kubernetes.io/projected/afe7afb2-5157-4e1b-964f-c402acb02765-kube-api-access-fc88l\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.367390 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-internal-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.367471 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-config\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.367514 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-combined-ca-bundle\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.469286 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-internal-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.469385 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-config\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.469427 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-combined-ca-bundle\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.469450 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-ovndb-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.469476 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-public-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.469508 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-httpd-config\") pod 
\"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.469555 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc88l\" (UniqueName: \"kubernetes.io/projected/afe7afb2-5157-4e1b-964f-c402acb02765-kube-api-access-fc88l\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.473918 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-public-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.474447 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-internal-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.475012 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-config\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.475228 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-httpd-config\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.481448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-ovndb-tls-certs\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.491122 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe7afb2-5157-4e1b-964f-c402acb02765-combined-ca-bundle\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.492187 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc88l\" (UniqueName: \"kubernetes.io/projected/afe7afb2-5157-4e1b-964f-c402acb02765-kube-api-access-fc88l\") pod \"neutron-f854495df-t6szb\" (UID: \"afe7afb2-5157-4e1b-964f-c402acb02765\") " pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:30 crc kubenswrapper[4688]: I1125 12:32:30.516892 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.277141 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.394389 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-fernet-keys\") pod \"0055b409-5571-400e-a4c1-46a58c368692\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.394843 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkhg\" (UniqueName: \"kubernetes.io/projected/0055b409-5571-400e-a4c1-46a58c368692-kube-api-access-wxkhg\") pod \"0055b409-5571-400e-a4c1-46a58c368692\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.394953 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-combined-ca-bundle\") pod \"0055b409-5571-400e-a4c1-46a58c368692\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.395049 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-config-data\") pod \"0055b409-5571-400e-a4c1-46a58c368692\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.395091 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-credential-keys\") pod \"0055b409-5571-400e-a4c1-46a58c368692\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.395157 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-scripts\") pod \"0055b409-5571-400e-a4c1-46a58c368692\" (UID: \"0055b409-5571-400e-a4c1-46a58c368692\") " Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.401894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0055b409-5571-400e-a4c1-46a58c368692" (UID: "0055b409-5571-400e-a4c1-46a58c368692"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.405002 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0055b409-5571-400e-a4c1-46a58c368692-kube-api-access-wxkhg" (OuterVolumeSpecName: "kube-api-access-wxkhg") pod "0055b409-5571-400e-a4c1-46a58c368692" (UID: "0055b409-5571-400e-a4c1-46a58c368692"). InnerVolumeSpecName "kube-api-access-wxkhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.405157 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-scripts" (OuterVolumeSpecName: "scripts") pod "0055b409-5571-400e-a4c1-46a58c368692" (UID: "0055b409-5571-400e-a4c1-46a58c368692"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.409601 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0055b409-5571-400e-a4c1-46a58c368692" (UID: "0055b409-5571-400e-a4c1-46a58c368692"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.444056 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0055b409-5571-400e-a4c1-46a58c368692" (UID: "0055b409-5571-400e-a4c1-46a58c368692"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.444321 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-config-data" (OuterVolumeSpecName: "config-data") pod "0055b409-5571-400e-a4c1-46a58c368692" (UID: "0055b409-5571-400e-a4c1-46a58c368692"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.503283 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.503320 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkhg\" (UniqueName: \"kubernetes.io/projected/0055b409-5571-400e-a4c1-46a58c368692-kube-api-access-wxkhg\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.503330 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.503340 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.503349 4688 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.503357 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0055b409-5571-400e-a4c1-46a58c368692-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.696323 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rzvvs" event={"ID":"5b031476-5e95-46a9-8774-4073f647cb7a","Type":"ContainerStarted","Data":"8e6a890bb14409a3517aed1593aece4db6c1f97e8b1cd44b677e0908070d1c2e"} Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.705502 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8nw4p" 
event={"ID":"0055b409-5571-400e-a4c1-46a58c368692","Type":"ContainerDied","Data":"f600a4ce9e9e28a9bc0fbcbf4f2f500be7eca62ea1e2202c5767d3348b1a16f3"} Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.705583 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f600a4ce9e9e28a9bc0fbcbf4f2f500be7eca62ea1e2202c5767d3348b1a16f3" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.705755 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8nw4p" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.738409 4688 generic.go:334] "Generic (PLEG): container finished" podID="05379dbe-faf8-4ac1-a032-40f31cb4e457" containerID="f5fce7caed6d2ee0655eab95b4631fce3b28f32bba732ab8c03a01cebf6c7d79" exitCode=0 Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.738514 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-crb7s" event={"ID":"05379dbe-faf8-4ac1-a032-40f31cb4e457","Type":"ContainerDied","Data":"f5fce7caed6d2ee0655eab95b4631fce3b28f32bba732ab8c03a01cebf6c7d79"} Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.739981 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rzvvs" podStartSLOduration=2.417408394 podStartE2EDuration="41.739957163s" podCreationTimestamp="2025-11-25 12:31:50 +0000 UTC" firstStartedPulling="2025-11-25 12:31:52.007250908 +0000 UTC m=+1062.116879776" lastFinishedPulling="2025-11-25 12:32:31.329799677 +0000 UTC m=+1101.439428545" observedRunningTime="2025-11-25 12:32:31.713180514 +0000 UTC m=+1101.822809382" watchObservedRunningTime="2025-11-25 12:32:31.739957163 +0000 UTC m=+1101.849586031" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.752796 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2611fb13-90fc-4310-a8dc-c224f4689a9f","Type":"ContainerStarted","Data":"5220104024f1355ff0f31e1e5cbe600e94d61aa589d0fab622dc461633654b35"} Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.789836 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-544b4d8674-8x8rj"] Nov 25 12:32:31 crc kubenswrapper[4688]: E1125 12:32:31.790321 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0055b409-5571-400e-a4c1-46a58c368692" containerName="keystone-bootstrap" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.790385 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0055b409-5571-400e-a4c1-46a58c368692" containerName="keystone-bootstrap" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.790638 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0055b409-5571-400e-a4c1-46a58c368692" containerName="keystone-bootstrap" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.792587 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.795366 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-544b4d8674-8x8rj"] Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.795945 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.796266 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.796673 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.796966 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.797150 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.797292 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f7h8n" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.821425 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-combined-ca-bundle\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.821653 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npg9s\" (UniqueName: \"kubernetes.io/projected/aaf46f92-56fe-402e-831c-7641bd8dc3d2-kube-api-access-npg9s\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.821875 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-fernet-keys\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.821929 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-internal-tls-certs\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.821999 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-config-data\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.831667 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-scripts\") pod \"keystone-544b4d8674-8x8rj\" (UID: 
\"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.831733 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-credential-keys\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.831772 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-public-tls-certs\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.871958 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lqdmt"] Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.924451 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.937323 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-fernet-keys\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.937399 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-internal-tls-certs\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.937449 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-config-data\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.937511 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-scripts\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.937594 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-credential-keys\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.937619 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-public-tls-certs\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.937666 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-combined-ca-bundle\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.937728 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npg9s\" (UniqueName: \"kubernetes.io/projected/aaf46f92-56fe-402e-831c-7641bd8dc3d2-kube-api-access-npg9s\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.942165 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-config-data\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.943762 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-combined-ca-bundle\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.944753 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-fernet-keys\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.946837 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-scripts\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: W1125 12:32:31.946850 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode00027ea_a0b8_4406_bb9f_5583cbec970f.slice/crio-7a17f6459efdff3be73c6a636f407449b36e1274fc6bebedc7fd4657b25a0a5d WatchSource:0}: Error finding container 7a17f6459efdff3be73c6a636f407449b36e1274fc6bebedc7fd4657b25a0a5d: Status 404 returned error can't find the container with id 7a17f6459efdff3be73c6a636f407449b36e1274fc6bebedc7fd4657b25a0a5d Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.947053 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-internal-tls-certs\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.949037 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-credential-keys\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.950350 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaf46f92-56fe-402e-831c-7641bd8dc3d2-public-tls-certs\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.968910 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npg9s\" (UniqueName: \"kubernetes.io/projected/aaf46f92-56fe-402e-831c-7641bd8dc3d2-kube-api-access-npg9s\") pod \"keystone-544b4d8674-8x8rj\" (UID: \"aaf46f92-56fe-402e-831c-7641bd8dc3d2\") " pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:31 crc kubenswrapper[4688]: I1125 12:32:31.989580 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f854495df-t6szb"] Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.133103 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.668026 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-544b4d8674-8x8rj"] Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.800869 4688 generic.go:334] "Generic (PLEG): container finished" podID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" containerID="b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c" exitCode=0 Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.800989 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" event={"ID":"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0","Type":"ContainerDied","Data":"b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.801020 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" event={"ID":"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0","Type":"ContainerStarted","Data":"a316acc489624a1e98ef6f61cfdf18ffebaf4dbf2cf992bebcadfe636a9bb043"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.837761 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-544b4d8674-8x8rj" event={"ID":"aaf46f92-56fe-402e-831c-7641bd8dc3d2","Type":"ContainerStarted","Data":"44454967d2569616e9783bbb86c5b77ff6dbcb700167c6b6e74b23bff98223a9"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.842060 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f854495df-t6szb" event={"ID":"afe7afb2-5157-4e1b-964f-c402acb02765","Type":"ContainerStarted","Data":"f26498968cd8eb94e31b7a2393d368db23f3e4f7d95d7730ab24263e322bd5c2"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.842095 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f854495df-t6szb" event={"ID":"afe7afb2-5157-4e1b-964f-c402acb02765","Type":"ContainerStarted","Data":"0da30167bc394c6fab6960f03d84723448287f00e1dc4ae1fc8cdb19731508d8"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.842106 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f854495df-t6szb" event={"ID":"afe7afb2-5157-4e1b-964f-c402acb02765","Type":"ContainerStarted","Data":"ebb862db153eb4df04c526bd2470c14815f6c976eb69881dd2cc66ee6096eb9c"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.842748 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f854495df-t6szb" Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 
12:32:32.853171 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b276d21b-cfa5-4b99-98f4-c75e85233b0c","Type":"ContainerStarted","Data":"a3b0b44a43dbc5e51249385c60e70fb4e3c5df01e8a46971467cdb0d490a491a"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.873905 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e00027ea-a0b8-4406-bb9f-5583cbec970f","Type":"ContainerStarted","Data":"1ce3730692da953529a011ec8e45e6f64ac03aa179f7bf9e77ed09d1cfacb979"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.873946 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e00027ea-a0b8-4406-bb9f-5583cbec970f","Type":"ContainerStarted","Data":"7a17f6459efdff3be73c6a636f407449b36e1274fc6bebedc7fd4657b25a0a5d"} Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.878684 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f854495df-t6szb" podStartSLOduration=2.878663627 podStartE2EDuration="2.878663627s" podCreationTimestamp="2025-11-25 12:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:32.861451284 +0000 UTC m=+1102.971080152" watchObservedRunningTime="2025-11-25 12:32:32.878663627 +0000 UTC m=+1102.988292485" Nov 25 12:32:32 crc kubenswrapper[4688]: I1125 12:32:32.919998 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.919974056000001 podStartE2EDuration="8.919974056s" podCreationTimestamp="2025-11-25 12:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:32.894979694 +0000 UTC m=+1103.004608562" watchObservedRunningTime="2025-11-25 12:32:32.919974056 +0000 UTC m=+1103.029602934" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.035015 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64d655c956-kd82z"] Nov 25 12:32:33 crc kubenswrapper[4688]: W1125 12:32:33.065846 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d0462a2_2ad3_4b7a_9092_31fa724fb4ee.slice/crio-ccc1ea27ec67a65fefacd2c55ac451b4a870cd9ae8b90bd9c3278e2985a0cfa3 WatchSource:0}: Error finding container ccc1ea27ec67a65fefacd2c55ac451b4a870cd9ae8b90bd9c3278e2985a0cfa3: Status 404 returned error can't find the container with id ccc1ea27ec67a65fefacd2c55ac451b4a870cd9ae8b90bd9c3278e2985a0cfa3 Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.249380 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-crb7s" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.278452 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-combined-ca-bundle\") pod \"05379dbe-faf8-4ac1-a032-40f31cb4e457\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.278641 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxhq7\" (UniqueName: \"kubernetes.io/projected/05379dbe-faf8-4ac1-a032-40f31cb4e457-kube-api-access-lxhq7\") pod \"05379dbe-faf8-4ac1-a032-40f31cb4e457\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.278826 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-db-sync-config-data\") pod \"05379dbe-faf8-4ac1-a032-40f31cb4e457\" (UID: \"05379dbe-faf8-4ac1-a032-40f31cb4e457\") " Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.285266 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05379dbe-faf8-4ac1-a032-40f31cb4e457-kube-api-access-lxhq7" (OuterVolumeSpecName: "kube-api-access-lxhq7") pod "05379dbe-faf8-4ac1-a032-40f31cb4e457" (UID: "05379dbe-faf8-4ac1-a032-40f31cb4e457"). InnerVolumeSpecName "kube-api-access-lxhq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.285412 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "05379dbe-faf8-4ac1-a032-40f31cb4e457" (UID: "05379dbe-faf8-4ac1-a032-40f31cb4e457"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.323890 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05379dbe-faf8-4ac1-a032-40f31cb4e457" (UID: "05379dbe-faf8-4ac1-a032-40f31cb4e457"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.381201 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.381253 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxhq7\" (UniqueName: \"kubernetes.io/projected/05379dbe-faf8-4ac1-a032-40f31cb4e457-kube-api-access-lxhq7\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.381278 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05379dbe-faf8-4ac1-a032-40f31cb4e457-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.900166 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-crb7s" event={"ID":"05379dbe-faf8-4ac1-a032-40f31cb4e457","Type":"ContainerDied","Data":"dd6b7c6cf4e95fe28a966419461f86dcb392b53f82af8934817bedfc5492132f"} Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.900629 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd6b7c6cf4e95fe28a966419461f86dcb392b53f82af8934817bedfc5492132f" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.900779 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-crb7s" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.915878 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e00027ea-a0b8-4406-bb9f-5583cbec970f","Type":"ContainerStarted","Data":"faa2eb11f3733696f3fc304f2b05d9101fe6abc330c685ea311d364f9d3bdbd9"} Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.927641 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" event={"ID":"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0","Type":"ContainerStarted","Data":"dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361"} Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.928102 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.957269 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-544b4d8674-8x8rj" event={"ID":"aaf46f92-56fe-402e-831c-7641bd8dc3d2","Type":"ContainerStarted","Data":"603c64bab209e476b8df1666f19cc083477a82ea37c78e707365ad6aef764698"} Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.958209 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.965899 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.965880106 podStartE2EDuration="7.965880106s" podCreationTimestamp="2025-11-25 12:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:33.959421973 +0000 UTC m=+1104.069050841" watchObservedRunningTime="2025-11-25 12:32:33.965880106 +0000 UTC m=+1104.075508974" Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.985582 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-64d655c956-kd82z" event={"ID":"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee","Type":"ContainerStarted","Data":"63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97"} Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.985858 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d655c956-kd82z" event={"ID":"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee","Type":"ContainerStarted","Data":"ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1"} Nov 25 12:32:33 crc kubenswrapper[4688]: I1125 12:32:33.985947 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d655c956-kd82z" event={"ID":"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee","Type":"ContainerStarted","Data":"ccc1ea27ec67a65fefacd2c55ac451b4a870cd9ae8b90bd9c3278e2985a0cfa3"} Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.000212 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-544b4d8674-8x8rj" podStartSLOduration=3.000192938 podStartE2EDuration="3.000192938s" podCreationTimestamp="2025-11-25 12:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:33.996635502 +0000 UTC m=+1104.106264370" watchObservedRunningTime="2025-11-25 12:32:34.000192938 +0000 UTC m=+1104.109821806" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.160743 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" podStartSLOduration=7.160721729 podStartE2EDuration="7.160721729s" podCreationTimestamp="2025-11-25 12:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:34.137047903 +0000 UTC m=+1104.246676781" watchObservedRunningTime="2025-11-25 12:32:34.160721729 +0000 UTC m=+1104.270350607" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.161868 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-75586f5599-pcl94"] Nov 25 12:32:34 crc kubenswrapper[4688]: E1125 12:32:34.162328 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05379dbe-faf8-4ac1-a032-40f31cb4e457" containerName="barbican-db-sync" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.162345 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="05379dbe-faf8-4ac1-a032-40f31cb4e457" containerName="barbican-db-sync" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.162536 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="05379dbe-faf8-4ac1-a032-40f31cb4e457" containerName="barbican-db-sync" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.163454 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.169918 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gx949" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.170118 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.170885 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.197246 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b7fdfcd-rxxwx"] Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.198764 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.209715 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.223654 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-75586f5599-pcl94"] Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.239213 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b7fdfcd-rxxwx"] Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.311980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-config-data-custom\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.312287 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.312371 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-config-data\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.312456 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ntl\" (UniqueName: \"kubernetes.io/projected/414e5c21-70ba-42cc-b382-558d0c95a1ea-kube-api-access-l8ntl\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.312548 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/414e5c21-70ba-42cc-b382-558d0c95a1ea-logs\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: 
\"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.312680 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zvbg\" (UniqueName: \"kubernetes.io/projected/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-kube-api-access-5zvbg\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.312762 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-logs\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.312834 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-config-data-custom\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.312914 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-config-data\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.313008 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-combined-ca-bundle\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.337472 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lqdmt"] Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.414833 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-logs\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.414880 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-config-data-custom\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.414917 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-config-data\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 
12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.415025 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-combined-ca-bundle\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.415063 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-config-data-custom\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.415108 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.415125 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-config-data\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.415156 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ntl\" (UniqueName: \"kubernetes.io/projected/414e5c21-70ba-42cc-b382-558d0c95a1ea-kube-api-access-l8ntl\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.415170 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/414e5c21-70ba-42cc-b382-558d0c95a1ea-logs\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.415247 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zvbg\" (UniqueName: \"kubernetes.io/projected/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-kube-api-access-5zvbg\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.416137 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-logs\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.418038 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/414e5c21-70ba-42cc-b382-558d0c95a1ea-logs\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") 
" pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.426456 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-combined-ca-bundle\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.429122 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.431648 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-config-data\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.440767 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-config-data-custom\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.441606 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ntl\" (UniqueName: \"kubernetes.io/projected/414e5c21-70ba-42cc-b382-558d0c95a1ea-kube-api-access-l8ntl\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.450281 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zvbg\" (UniqueName: \"kubernetes.io/projected/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-kube-api-access-5zvbg\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.453225 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edbc7ba6-1fa5-418d-a639-3b88eee1c4fb-config-data-custom\") pod \"barbican-worker-75586f5599-pcl94\" (UID: \"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb\") " pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.470873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/414e5c21-70ba-42cc-b382-558d0c95a1ea-config-data\") pod \"barbican-keystone-listener-6b7fdfcd-rxxwx\" (UID: \"414e5c21-70ba-42cc-b382-558d0c95a1ea\") " pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.474925 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59c48fd54-hnlcs"] Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.477118 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.481223 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.499300 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ljt6t"] Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.501205 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-75586f5599-pcl94" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.503501 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.517090 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c48fd54-hnlcs"] Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.535901 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ljt6t"] Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.539444 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.618489 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r52c2\" (UniqueName: \"kubernetes.io/projected/cb7419cf-e7db-4199-9ffa-426db08c3b43-kube-api-access-r52c2\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.618566 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.618615 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.618656 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.618813 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-config\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.618886 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-combined-ca-bundle\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.618920 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80800696-d4a7-436c-82b5-0f627dd289db-logs\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.618975 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.619019 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.619054 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdgr\" (UniqueName: \"kubernetes.io/projected/80800696-d4a7-436c-82b5-0f627dd289db-kube-api-access-fhdgr\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.619908 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data-custom\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.721947 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhdgr\" (UniqueName: \"kubernetes.io/projected/80800696-d4a7-436c-82b5-0f627dd289db-kube-api-access-fhdgr\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data-custom\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722066 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r52c2\" (UniqueName: \"kubernetes.io/projected/cb7419cf-e7db-4199-9ffa-426db08c3b43-kube-api-access-r52c2\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722085 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722115 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722142 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722203 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-config\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-combined-ca-bundle\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722250 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80800696-d4a7-436c-82b5-0f627dd289db-logs\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722278 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.722302 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.727312 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.728466 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/80800696-d4a7-436c-82b5-0f627dd289db-logs\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.737435 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.738541 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-config\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.738789 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.738855 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.742313 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.743502 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data-custom\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.746155 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r52c2\" (UniqueName: \"kubernetes.io/projected/cb7419cf-e7db-4199-9ffa-426db08c3b43-kube-api-access-r52c2\") pod \"dnsmasq-dns-85ff748b95-ljt6t\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.756675 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhdgr\" (UniqueName: \"kubernetes.io/projected/80800696-d4a7-436c-82b5-0f627dd289db-kube-api-access-fhdgr\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.758781 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-combined-ca-bundle\") pod \"barbican-api-59c48fd54-hnlcs\" (UID: 
\"80800696-d4a7-436c-82b5-0f627dd289db\") " pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.873422 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.894890 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:34 crc kubenswrapper[4688]: I1125 12:32:34.996744 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:35 crc kubenswrapper[4688]: I1125 12:32:35.031082 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64d655c956-kd82z" podStartSLOduration=8.031052584 podStartE2EDuration="8.031052584s" podCreationTimestamp="2025-11-25 12:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:35.024060336 +0000 UTC m=+1105.133689204" watchObservedRunningTime="2025-11-25 12:32:35.031052584 +0000 UTC m=+1105.140681452" Nov 25 12:32:35 crc kubenswrapper[4688]: I1125 12:32:35.361167 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 12:32:35 crc kubenswrapper[4688]: I1125 12:32:35.361226 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 12:32:35 crc kubenswrapper[4688]: I1125 12:32:35.398544 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 12:32:35 crc kubenswrapper[4688]: I1125 12:32:35.408643 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.018730 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" podUID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" containerName="dnsmasq-dns" containerID="cri-o://dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361" gracePeriod=10 Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.022625 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.022681 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.272019 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b7fdfcd-rxxwx"] Nov 25 12:32:36 crc kubenswrapper[4688]: W1125 12:32:36.283259 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod414e5c21_70ba_42cc_b382_558d0c95a1ea.slice/crio-0d01fd8a0b364eb482ff2374b72dcb9778fe79223328f3d57ee662849f00c505 WatchSource:0}: Error finding container 0d01fd8a0b364eb482ff2374b72dcb9778fe79223328f3d57ee662849f00c505: Status 404 returned error can't find the container with id 0d01fd8a0b364eb482ff2374b72dcb9778fe79223328f3d57ee662849f00c505 Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.320984 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-75586f5599-pcl94"] Nov 25 12:32:36 crc 
kubenswrapper[4688]: W1125 12:32:36.337293 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedbc7ba6_1fa5_418d_a639_3b88eee1c4fb.slice/crio-4e88fc86698474c8b65908a555a2a5d5d6e836763d730a5151c9304b06207d95 WatchSource:0}: Error finding container 4e88fc86698474c8b65908a555a2a5d5d6e836763d730a5151c9304b06207d95: Status 404 returned error can't find the container with id 4e88fc86698474c8b65908a555a2a5d5d6e836763d730a5151c9304b06207d95 Nov 25 12:32:36 crc kubenswrapper[4688]: E1125 12:32:36.387694 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b031476_5e95_46a9_8774_4073f647cb7a.slice/crio-conmon-8e6a890bb14409a3517aed1593aece4db6c1f97e8b1cd44b677e0908070d1c2e.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.425478 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ljt6t"] Nov 25 12:32:36 crc kubenswrapper[4688]: W1125 12:32:36.429930 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb7419cf_e7db_4199_9ffa_426db08c3b43.slice/crio-4425fb9261fd6033ed908c95ff0444050bd5ba3834422c9aa8c82c00995ea793 WatchSource:0}: Error finding container 4425fb9261fd6033ed908c95ff0444050bd5ba3834422c9aa8c82c00995ea793: Status 404 returned error can't find the container with id 4425fb9261fd6033ed908c95ff0444050bd5ba3834422c9aa8c82c00995ea793 Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.517571 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c48fd54-hnlcs"] Nov 25 12:32:36 crc kubenswrapper[4688]: W1125 12:32:36.535187 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80800696_d4a7_436c_82b5_0f627dd289db.slice/crio-4983496bb9ed718d9ff2c1ad78b03c8eafef938207012fc918728b30efcf9f79 WatchSource:0}: Error finding container 4983496bb9ed718d9ff2c1ad78b03c8eafef938207012fc918728b30efcf9f79: Status 404 returned error can't find the container with id 4983496bb9ed718d9ff2c1ad78b03c8eafef938207012fc918728b30efcf9f79 Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.568799 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.673483 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-nb\") pod \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.673591 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-svc\") pod \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.673730 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-swift-storage-0\") pod \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.673773 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-config\") pod \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.673880 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wkr2\" (UniqueName: \"kubernetes.io/projected/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-kube-api-access-2wkr2\") pod \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.674055 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-sb\") pod \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\" (UID: \"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0\") " Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.682558 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-kube-api-access-2wkr2" (OuterVolumeSpecName: "kube-api-access-2wkr2") pod "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" (UID: "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0"). InnerVolumeSpecName "kube-api-access-2wkr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.776047 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wkr2\" (UniqueName: \"kubernetes.io/projected/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-kube-api-access-2wkr2\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.813056 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" (UID: "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.815090 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" (UID: "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.820213 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" (UID: "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.826489 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" (UID: "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.839039 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-config" (OuterVolumeSpecName: "config") pod "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" (UID: "f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.878757 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.878793 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.878809 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.878823 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:36 crc kubenswrapper[4688]: I1125 12:32:36.878836 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.045068 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c48fd54-hnlcs" event={"ID":"80800696-d4a7-436c-82b5-0f627dd289db","Type":"ContainerStarted","Data":"8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19"} Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.045145 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-59c48fd54-hnlcs" event={"ID":"80800696-d4a7-436c-82b5-0f627dd289db","Type":"ContainerStarted","Data":"4983496bb9ed718d9ff2c1ad78b03c8eafef938207012fc918728b30efcf9f79"} Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.051194 4688 generic.go:334] "Generic (PLEG): container finished" podID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" containerID="dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361" exitCode=0 Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.051264 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" event={"ID":"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0","Type":"ContainerDied","Data":"dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361"} Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.051323 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" event={"ID":"f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0","Type":"ContainerDied","Data":"a316acc489624a1e98ef6f61cfdf18ffebaf4dbf2cf992bebcadfe636a9bb043"} Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.051345 4688 scope.go:117] "RemoveContainer" containerID="dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.051489 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-lqdmt" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.058515 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb7419cf-e7db-4199-9ffa-426db08c3b43" containerID="b058e397b490af581d8f1b9bb7fd2d328e501b3c4d4a3b7c68624b4d8b0a6aed" exitCode=0 Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.058593 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" event={"ID":"cb7419cf-e7db-4199-9ffa-426db08c3b43","Type":"ContainerDied","Data":"b058e397b490af581d8f1b9bb7fd2d328e501b3c4d4a3b7c68624b4d8b0a6aed"} Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.058617 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" event={"ID":"cb7419cf-e7db-4199-9ffa-426db08c3b43","Type":"ContainerStarted","Data":"4425fb9261fd6033ed908c95ff0444050bd5ba3834422c9aa8c82c00995ea793"} Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.064179 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75586f5599-pcl94" event={"ID":"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb","Type":"ContainerStarted","Data":"4e88fc86698474c8b65908a555a2a5d5d6e836763d730a5151c9304b06207d95"} Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.069011 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" event={"ID":"414e5c21-70ba-42cc-b382-558d0c95a1ea","Type":"ContainerStarted","Data":"0d01fd8a0b364eb482ff2374b72dcb9778fe79223328f3d57ee662849f00c505"} Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.077132 4688 generic.go:334] "Generic (PLEG): container finished" podID="5b031476-5e95-46a9-8774-4073f647cb7a" containerID="8e6a890bb14409a3517aed1593aece4db6c1f97e8b1cd44b677e0908070d1c2e" exitCode=0 Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.078507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rzvvs" event={"ID":"5b031476-5e95-46a9-8774-4073f647cb7a","Type":"ContainerDied","Data":"8e6a890bb14409a3517aed1593aece4db6c1f97e8b1cd44b677e0908070d1c2e"} Nov 25 12:32:37 
crc kubenswrapper[4688]: I1125 12:32:37.115944 4688 scope.go:117] "RemoveContainer" containerID="b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.173754 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lqdmt"] Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.187703 4688 scope.go:117] "RemoveContainer" containerID="dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361" Nov 25 12:32:37 crc kubenswrapper[4688]: E1125 12:32:37.188232 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361\": container with ID starting with dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361 not found: ID does not exist" containerID="dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.188349 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361"} err="failed to get container status \"dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361\": rpc error: code = NotFound desc = could not find container \"dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361\": container with ID starting with dcff8868d9b836435061e939731bdbc56fed11a2f512af538b68d0ae5864a361 not found: ID does not exist" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.188441 4688 scope.go:117] "RemoveContainer" containerID="b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c" Nov 25 12:32:37 crc kubenswrapper[4688]: E1125 12:32:37.189444 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c\": container with ID starting with b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c not found: ID does not exist" containerID="b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.189484 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c"} err="failed to get container status \"b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c\": rpc error: code = NotFound desc = could not find container \"b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c\": container with ID starting with b0fc611de6120925dfa4e6e1a20ab324cca35ff9793e4a53513bcc5db497e33c not found: ID does not exist" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.190790 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-lqdmt"] Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.287968 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c8499f65b-2b8f7"] Nov 25 12:32:37 crc kubenswrapper[4688]: E1125 12:32:37.289989 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" containerName="dnsmasq-dns" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.290014 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" containerName="dnsmasq-dns" Nov 25 12:32:37 crc 
kubenswrapper[4688]: E1125 12:32:37.290084 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" containerName="init" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.290093 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" containerName="init" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.290796 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" containerName="dnsmasq-dns" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.293050 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.300859 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.301339 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.316700 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c8499f65b-2b8f7"] Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.395169 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a7c7991-8c0b-481a-81f1-62119d1d47e5-logs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.395252 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbln\" (UniqueName: \"kubernetes.io/projected/0a7c7991-8c0b-481a-81f1-62119d1d47e5-kube-api-access-pwbln\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.395282 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-internal-tls-certs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.395360 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-combined-ca-bundle\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.395411 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-config-data-custom\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.395433 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-public-tls-certs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.395484 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-config-data\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.497297 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-config-data\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.497441 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a7c7991-8c0b-481a-81f1-62119d1d47e5-logs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.497476 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbln\" (UniqueName: \"kubernetes.io/projected/0a7c7991-8c0b-481a-81f1-62119d1d47e5-kube-api-access-pwbln\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.497496 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-internal-tls-certs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.497566 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-combined-ca-bundle\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.497603 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-config-data-custom\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.497620 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-public-tls-certs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.499040 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0a7c7991-8c0b-481a-81f1-62119d1d47e5-logs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.500388 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.500764 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.503917 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-config-data-custom\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.504380 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-internal-tls-certs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.504498 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-public-tls-certs\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.505790 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-config-data\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.508025 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7c7991-8c0b-481a-81f1-62119d1d47e5-combined-ca-bundle\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.519359 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbln\" (UniqueName: \"kubernetes.io/projected/0a7c7991-8c0b-481a-81f1-62119d1d47e5-kube-api-access-pwbln\") pod \"barbican-api-c8499f65b-2b8f7\" (UID: \"0a7c7991-8c0b-481a-81f1-62119d1d47e5\") " pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.587584 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.635146 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:37 crc kubenswrapper[4688]: I1125 12:32:37.697566 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.093796 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" event={"ID":"cb7419cf-e7db-4199-9ffa-426db08c3b43","Type":"ContainerStarted","Data":"1700eb6e04dcc5607550ab26263f99417b3aa0b8dac371f13dab0b251c3dbb67"} Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.094183 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.112345 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.113552 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c48fd54-hnlcs" event={"ID":"80800696-d4a7-436c-82b5-0f627dd289db","Type":"ContainerStarted","Data":"d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1"} Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.113857 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.114005 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.116049 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.116087 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.157326 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59c48fd54-hnlcs" podStartSLOduration=4.157303388 podStartE2EDuration="4.157303388s" podCreationTimestamp="2025-11-25 12:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:38.156466485 +0000 UTC m=+1108.266095353" watchObservedRunningTime="2025-11-25 12:32:38.157303388 +0000 UTC m=+1108.266932256" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.163806 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" podStartSLOduration=4.163789511 podStartE2EDuration="4.163789511s" podCreationTimestamp="2025-11-25 12:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:38.113421699 +0000 UTC m=+1108.223050567" watchObservedRunningTime="2025-11-25 12:32:38.163789511 +0000 UTC m=+1108.273418379" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.754644 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0" path="/var/lib/kubelet/pods/f0f3e9e7-6b0f-4a98-a0bb-b72beebb28b0/volumes" Nov 25 12:32:38 crc kubenswrapper[4688]: I1125 12:32:38.788055 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.082000 4688 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rzvvs" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.141550 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-config-data\") pod \"5b031476-5e95-46a9-8774-4073f647cb7a\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.141634 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-combined-ca-bundle\") pod \"5b031476-5e95-46a9-8774-4073f647cb7a\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.141741 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b031476-5e95-46a9-8774-4073f647cb7a-logs\") pod \"5b031476-5e95-46a9-8774-4073f647cb7a\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.141828 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-scripts\") pod \"5b031476-5e95-46a9-8774-4073f647cb7a\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.141913 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbcdb\" (UniqueName: \"kubernetes.io/projected/5b031476-5e95-46a9-8774-4073f647cb7a-kube-api-access-lbcdb\") pod \"5b031476-5e95-46a9-8774-4073f647cb7a\" (UID: \"5b031476-5e95-46a9-8774-4073f647cb7a\") " Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.143880 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b031476-5e95-46a9-8774-4073f647cb7a-logs" (OuterVolumeSpecName: "logs") pod "5b031476-5e95-46a9-8774-4073f647cb7a" (UID: "5b031476-5e95-46a9-8774-4073f647cb7a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.167803 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b031476-5e95-46a9-8774-4073f647cb7a-kube-api-access-lbcdb" (OuterVolumeSpecName: "kube-api-access-lbcdb") pod "5b031476-5e95-46a9-8774-4073f647cb7a" (UID: "5b031476-5e95-46a9-8774-4073f647cb7a"). InnerVolumeSpecName "kube-api-access-lbcdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.179855 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-scripts" (OuterVolumeSpecName: "scripts") pod "5b031476-5e95-46a9-8774-4073f647cb7a" (UID: "5b031476-5e95-46a9-8774-4073f647cb7a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.183224 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rzvvs" event={"ID":"5b031476-5e95-46a9-8774-4073f647cb7a","Type":"ContainerDied","Data":"0ad0e260ab3ea214e048cb6382b5c44878c7744e549c8ba3ef8f8c9d65af8d0d"} Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.183264 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad0e260ab3ea214e048cb6382b5c44878c7744e549c8ba3ef8f8c9d65af8d0d" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.183328 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rzvvs" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.219543 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.219647 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tgslp" event={"ID":"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7","Type":"ContainerStarted","Data":"24252ac5f580701179af2588ffd109dc96f0ac377e33a8cac235da15003ccdcb"} Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.248745 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b031476-5e95-46a9-8774-4073f647cb7a-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.248774 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.248783 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbcdb\" (UniqueName: \"kubernetes.io/projected/5b031476-5e95-46a9-8774-4073f647cb7a-kube-api-access-lbcdb\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.255215 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-tgslp" podStartSLOduration=3.495404025 podStartE2EDuration="49.255193674s" podCreationTimestamp="2025-11-25 12:31:50 +0000 UTC" firstStartedPulling="2025-11-25 12:31:51.544855679 +0000 UTC m=+1061.654484547" lastFinishedPulling="2025-11-25 12:32:37.304645328 +0000 UTC m=+1107.414274196" observedRunningTime="2025-11-25 12:32:39.247988991 +0000 UTC m=+1109.357617869" watchObservedRunningTime="2025-11-25 12:32:39.255193674 +0000 UTC m=+1109.364822542" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.271290 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-config-data" (OuterVolumeSpecName: "config-data") pod "5b031476-5e95-46a9-8774-4073f647cb7a" (UID: "5b031476-5e95-46a9-8774-4073f647cb7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.281888 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b031476-5e95-46a9-8774-4073f647cb7a" (UID: "5b031476-5e95-46a9-8774-4073f647cb7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.295717 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85ddfb974d-m4b6g"] Nov 25 12:32:39 crc kubenswrapper[4688]: E1125 12:32:39.296162 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b031476-5e95-46a9-8774-4073f647cb7a" containerName="placement-db-sync" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.296180 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b031476-5e95-46a9-8774-4073f647cb7a" containerName="placement-db-sync" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.296417 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b031476-5e95-46a9-8774-4073f647cb7a" containerName="placement-db-sync" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.297610 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.302394 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.302669 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.310245 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ddfb974d-m4b6g"] Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.351072 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-config-data\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.351235 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-combined-ca-bundle\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.351330 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4rx\" (UniqueName: \"kubernetes.io/projected/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-kube-api-access-9w4rx\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.351417 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-internal-tls-certs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.351509 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-public-tls-certs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 
12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.351677 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-scripts\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.351925 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-logs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.352023 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.352043 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b031476-5e95-46a9-8774-4073f647cb7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.454055 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-logs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.454504 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-config-data\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.454582 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-combined-ca-bundle\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.454620 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-logs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.454622 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4rx\" (UniqueName: \"kubernetes.io/projected/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-kube-api-access-9w4rx\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.454702 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-internal-tls-certs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " 
pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.456412 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-public-tls-certs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.456571 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-scripts\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.462355 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-public-tls-certs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.465354 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-config-data\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.466595 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-combined-ca-bundle\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.466933 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-scripts\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.468916 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-internal-tls-certs\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.480020 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4rx\" (UniqueName: \"kubernetes.io/projected/e27f5e13-7857-4d61-bbe1-cb74fb57f7d4-kube-api-access-9w4rx\") pod \"placement-85ddfb974d-m4b6g\" (UID: \"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4\") " pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.491936 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c8499f65b-2b8f7"] Nov 25 12:32:39 crc kubenswrapper[4688]: I1125 12:32:39.677210 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:40 crc kubenswrapper[4688]: I1125 12:32:40.266705 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 12:32:40 crc kubenswrapper[4688]: I1125 12:32:40.943744 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:40 crc kubenswrapper[4688]: I1125 12:32:40.944141 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:32:40 crc kubenswrapper[4688]: I1125 12:32:40.946504 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 12:32:43 crc kubenswrapper[4688]: I1125 12:32:43.280661 4688 generic.go:334] "Generic (PLEG): container finished" podID="ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" containerID="24252ac5f580701179af2588ffd109dc96f0ac377e33a8cac235da15003ccdcb" exitCode=0 Nov 25 12:32:43 crc kubenswrapper[4688]: I1125 12:32:43.280827 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tgslp" event={"ID":"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7","Type":"ContainerDied","Data":"24252ac5f580701179af2588ffd109dc96f0ac377e33a8cac235da15003ccdcb"} Nov 25 12:32:43 crc kubenswrapper[4688]: W1125 12:32:43.613118 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a7c7991_8c0b_481a_81f1_62119d1d47e5.slice/crio-1c69606e950f2b82b6a7daffd002419468587dbf372cec84d054fcb2a23b1d6f WatchSource:0}: Error finding container 1c69606e950f2b82b6a7daffd002419468587dbf372cec84d054fcb2a23b1d6f: Status 404 returned error can't find the container with id 1c69606e950f2b82b6a7daffd002419468587dbf372cec84d054fcb2a23b1d6f Nov 25 12:32:44 crc kubenswrapper[4688]: I1125 12:32:44.304346 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c8499f65b-2b8f7" event={"ID":"0a7c7991-8c0b-481a-81f1-62119d1d47e5","Type":"ContainerStarted","Data":"1c69606e950f2b82b6a7daffd002419468587dbf372cec84d054fcb2a23b1d6f"} Nov 25 12:32:44 crc kubenswrapper[4688]: E1125 12:32:44.454448 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" Nov 25 12:32:44 crc kubenswrapper[4688]: I1125 12:32:44.502418 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ddfb974d-m4b6g"] Nov 25 12:32:44 crc kubenswrapper[4688]: I1125 12:32:44.794101 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-tgslp" Nov 25 12:32:44 crc kubenswrapper[4688]: I1125 12:32:44.903975 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:44.994293 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-combined-ca-bundle\") pod \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:44.994504 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nwq2\" (UniqueName: \"kubernetes.io/projected/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-kube-api-access-9nwq2\") pod \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:44.994567 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-config-data\") pod \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\" (UID: \"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.038318 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-kube-api-access-9nwq2" (OuterVolumeSpecName: "kube-api-access-9nwq2") pod "ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" (UID: "ff6fe51c-f968-4dd0-93c2-b355ac6c27c7"). InnerVolumeSpecName "kube-api-access-9nwq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.102076 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nwq2\" (UniqueName: \"kubernetes.io/projected/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-kube-api-access-9nwq2\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.106272 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jqj9f"] Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.106584 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" podUID="4e944f85-67c1-480a-8656-e34aba801d33" containerName="dnsmasq-dns" containerID="cri-o://4f864f0266c880a28f16b64e086e0f2f63cda713517b59167308d8ef6e07f1c2" gracePeriod=10 Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.130357 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-config-data" (OuterVolumeSpecName: "config-data") pod "ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" (UID: "ff6fe51c-f968-4dd0-93c2-b355ac6c27c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.160696 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" (UID: "ff6fe51c-f968-4dd0-93c2-b355ac6c27c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.203924 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.203951 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.337747 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2611fb13-90fc-4310-a8dc-c224f4689a9f","Type":"ContainerStarted","Data":"324328d681dacb1433d9618dabba06d5f948bf092d929c8aff682d8748b343fe"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.338134 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="ceilometer-notification-agent" containerID="cri-o://4de5225a1b6ae02445eebb45e3f94dac16393d84450e58aa87720368e4a74838" gracePeriod=30 Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.338270 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="proxy-httpd" containerID="cri-o://324328d681dacb1433d9618dabba06d5f948bf092d929c8aff682d8748b343fe" gracePeriod=30 Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.338332 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="sg-core" containerID="cri-o://5220104024f1355ff0f31e1e5cbe600e94d61aa589d0fab622dc461633654b35" gracePeriod=30 Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.338392 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.357888 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75586f5599-pcl94" event={"ID":"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb","Type":"ContainerStarted","Data":"84d9f3a81845fa2dca27315a87b4498d4f3a47c9ebc9743c09a5a33bc2cc446d"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.357934 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75586f5599-pcl94" event={"ID":"edbc7ba6-1fa5-418d-a639-3b88eee1c4fb","Type":"ContainerStarted","Data":"9cc18d805f257d26bebd610bf3e78e515d3a7a7b61f8d3fd37730fe171863642"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.384720 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" event={"ID":"414e5c21-70ba-42cc-b382-558d0c95a1ea","Type":"ContainerStarted","Data":"324fa669a363251dcaa65b0a77ceb2ddb67755dfff134b98bc5b9bc952819f2c"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.384775 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" event={"ID":"414e5c21-70ba-42cc-b382-558d0c95a1ea","Type":"ContainerStarted","Data":"239087f6729537bce1df1eee661ae0b189fda0a46f5019d3cd396361c844fe40"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.393776 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-75586f5599-pcl94" podStartSLOduration=3.713004325 podStartE2EDuration="11.39375246s" podCreationTimestamp="2025-11-25 12:32:34 +0000 UTC" firstStartedPulling="2025-11-25 12:32:36.339897027 +0000 UTC m=+1106.449525895" lastFinishedPulling="2025-11-25 12:32:44.020645162 +0000 UTC m=+1114.130274030" observedRunningTime="2025-11-25 12:32:45.392910277 +0000 UTC m=+1115.502539145" watchObservedRunningTime="2025-11-25 12:32:45.39375246 +0000 UTC m=+1115.503381328" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.407685 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-slb7z" event={"ID":"588d841f-905c-42bb-9242-2e86b7e66877","Type":"ContainerStarted","Data":"234e8cc0f04be0ad168f3a9effceca6cb04ea5b4326a6c9175811a574e0e0aab"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.419807 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b7fdfcd-rxxwx" podStartSLOduration=3.687535851 podStartE2EDuration="11.41978989s" podCreationTimestamp="2025-11-25 12:32:34 +0000 UTC" firstStartedPulling="2025-11-25 12:32:36.288392183 +0000 UTC m=+1106.398021051" lastFinishedPulling="2025-11-25 12:32:44.020646222 +0000 UTC m=+1114.130275090" observedRunningTime="2025-11-25 12:32:45.418940007 +0000 UTC m=+1115.528568875" watchObservedRunningTime="2025-11-25 12:32:45.41978989 +0000 UTC m=+1115.529418758" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.420598 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ddfb974d-m4b6g" event={"ID":"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4","Type":"ContainerStarted","Data":"0206e5a51c9bd5aa7e2a301dd2aa405ae80e0e6809dbc17dbaf5f6ddb35f307c"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.420664 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ddfb974d-m4b6g" event={"ID":"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4","Type":"ContainerStarted","Data":"721ef35159fcd21e5ac8b2e8f87c283ee6331ff32a10e55d9d3ee4a7cc1f873f"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.438681 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c8499f65b-2b8f7" event={"ID":"0a7c7991-8c0b-481a-81f1-62119d1d47e5","Type":"ContainerStarted","Data":"8d2dc64f1fb98492e73fa424c6c246c7a0e3aee5842d24e95e59291db1c22e80"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.438720 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c8499f65b-2b8f7" event={"ID":"0a7c7991-8c0b-481a-81f1-62119d1d47e5","Type":"ContainerStarted","Data":"10469695f106559b55bfcd9ba1c3cdbc093ed0190adacdbdce13520ca3e95855"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.439209 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.439269 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.445090 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tgslp" event={"ID":"ff6fe51c-f968-4dd0-93c2-b355ac6c27c7","Type":"ContainerDied","Data":"a51d1234c4126a92d1edb92d8dfba869fd68c159863e8de5bbe429f93f6df595"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.445138 4688 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a51d1234c4126a92d1edb92d8dfba869fd68c159863e8de5bbe429f93f6df595" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.445206 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-tgslp" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.480900 4688 generic.go:334] "Generic (PLEG): container finished" podID="4e944f85-67c1-480a-8656-e34aba801d33" containerID="4f864f0266c880a28f16b64e086e0f2f63cda713517b59167308d8ef6e07f1c2" exitCode=0 Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.480939 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" event={"ID":"4e944f85-67c1-480a-8656-e34aba801d33","Type":"ContainerDied","Data":"4f864f0266c880a28f16b64e086e0f2f63cda713517b59167308d8ef6e07f1c2"} Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.494660 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-slb7z" podStartSLOduration=3.092838283 podStartE2EDuration="55.49463955s" podCreationTimestamp="2025-11-25 12:31:50 +0000 UTC" firstStartedPulling="2025-11-25 12:31:51.786435457 +0000 UTC m=+1061.896064325" lastFinishedPulling="2025-11-25 12:32:44.188236724 +0000 UTC m=+1114.297865592" observedRunningTime="2025-11-25 12:32:45.437399173 +0000 UTC m=+1115.547028041" watchObservedRunningTime="2025-11-25 12:32:45.49463955 +0000 UTC m=+1115.604268418" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.511688 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c8499f65b-2b8f7" podStartSLOduration=8.511669947 podStartE2EDuration="8.511669947s" podCreationTimestamp="2025-11-25 12:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:45.4614988 +0000 UTC m=+1115.571127668" watchObservedRunningTime="2025-11-25 12:32:45.511669947 +0000 UTC m=+1115.621298815" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.599873 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.722272 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-sb\") pod \"4e944f85-67c1-480a-8656-e34aba801d33\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.722392 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-svc\") pod \"4e944f85-67c1-480a-8656-e34aba801d33\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.722443 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-nb\") pod \"4e944f85-67c1-480a-8656-e34aba801d33\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.722550 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-config\") pod \"4e944f85-67c1-480a-8656-e34aba801d33\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.722585 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-swift-storage-0\") pod \"4e944f85-67c1-480a-8656-e34aba801d33\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.722629 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt959\" (UniqueName: \"kubernetes.io/projected/4e944f85-67c1-480a-8656-e34aba801d33-kube-api-access-lt959\") pod \"4e944f85-67c1-480a-8656-e34aba801d33\" (UID: \"4e944f85-67c1-480a-8656-e34aba801d33\") " Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.781818 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e944f85-67c1-480a-8656-e34aba801d33-kube-api-access-lt959" (OuterVolumeSpecName: "kube-api-access-lt959") pod "4e944f85-67c1-480a-8656-e34aba801d33" (UID: "4e944f85-67c1-480a-8656-e34aba801d33"). InnerVolumeSpecName "kube-api-access-lt959". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.827362 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt959\" (UniqueName: \"kubernetes.io/projected/4e944f85-67c1-480a-8656-e34aba801d33-kube-api-access-lt959\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.916754 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4e944f85-67c1-480a-8656-e34aba801d33" (UID: "4e944f85-67c1-480a-8656-e34aba801d33"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.921052 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e944f85-67c1-480a-8656-e34aba801d33" (UID: "4e944f85-67c1-480a-8656-e34aba801d33"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.928986 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.929018 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.943839 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e944f85-67c1-480a-8656-e34aba801d33" (UID: "4e944f85-67c1-480a-8656-e34aba801d33"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.952748 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e944f85-67c1-480a-8656-e34aba801d33" (UID: "4e944f85-67c1-480a-8656-e34aba801d33"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:45 crc kubenswrapper[4688]: I1125 12:32:45.954006 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-config" (OuterVolumeSpecName: "config") pod "4e944f85-67c1-480a-8656-e34aba801d33" (UID: "4e944f85-67c1-480a-8656-e34aba801d33"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.031012 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.031047 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.031061 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e944f85-67c1-480a-8656-e34aba801d33-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.112275 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59c48fd54-hnlcs" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.498022 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ddfb974d-m4b6g" event={"ID":"e27f5e13-7857-4d61-bbe1-cb74fb57f7d4","Type":"ContainerStarted","Data":"3551844551edf123f28351c78e8f65147a5072605c1bf0a3d789e41f5c44c01f"} Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.498689 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.498762 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.504668 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" event={"ID":"4e944f85-67c1-480a-8656-e34aba801d33","Type":"ContainerDied","Data":"3728aaa28ab1ad5473403f5d80b63432ef73dc3ddff7e630348d1a09d6572357"} Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.504743 4688 scope.go:117] "RemoveContainer" containerID="4f864f0266c880a28f16b64e086e0f2f63cda713517b59167308d8ef6e07f1c2" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.504969 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jqj9f" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.519249 4688 generic.go:334] "Generic (PLEG): container finished" podID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerID="324328d681dacb1433d9618dabba06d5f948bf092d929c8aff682d8748b343fe" exitCode=0 Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.519301 4688 generic.go:334] "Generic (PLEG): container finished" podID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerID="5220104024f1355ff0f31e1e5cbe600e94d61aa589d0fab622dc461633654b35" exitCode=2 Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.519506 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2611fb13-90fc-4310-a8dc-c224f4689a9f","Type":"ContainerDied","Data":"324328d681dacb1433d9618dabba06d5f948bf092d929c8aff682d8748b343fe"} Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.519562 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2611fb13-90fc-4310-a8dc-c224f4689a9f","Type":"ContainerDied","Data":"5220104024f1355ff0f31e1e5cbe600e94d61aa589d0fab622dc461633654b35"} Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.549215 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85ddfb974d-m4b6g" podStartSLOduration=7.549188433 podStartE2EDuration="7.549188433s" podCreationTimestamp="2025-11-25 12:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:46.543424908 +0000 UTC m=+1116.653053766" watchObservedRunningTime="2025-11-25 12:32:46.549188433 +0000 UTC m=+1116.658817301" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.551484 4688 scope.go:117] "RemoveContainer" containerID="749bad20d0dd7040a60eda6d792dd229e3d794b0bab92f0ce239ef3ca0204f1b" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.572696 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jqj9f"] Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.582697 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jqj9f"] Nov 25 12:32:46 crc kubenswrapper[4688]: E1125 12:32:46.700734 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e944f85_67c1_480a_8656_e34aba801d33.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.758290 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e944f85-67c1-480a-8656-e34aba801d33" path="/var/lib/kubelet/pods/4e944f85-67c1-480a-8656-e34aba801d33/volumes" Nov 25 12:32:46 crc kubenswrapper[4688]: I1125 12:32:46.960324 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:47 crc kubenswrapper[4688]: I1125 12:32:47.117225 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.538927 4688 generic.go:334] "Generic (PLEG): container finished" podID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerID="4de5225a1b6ae02445eebb45e3f94dac16393d84450e58aa87720368e4a74838" exitCode=0 Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.538976 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2611fb13-90fc-4310-a8dc-c224f4689a9f","Type":"ContainerDied","Data":"4de5225a1b6ae02445eebb45e3f94dac16393d84450e58aa87720368e4a74838"} Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.653422 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.790285 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-sg-core-conf-yaml\") pod \"2611fb13-90fc-4310-a8dc-c224f4689a9f\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.790446 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-log-httpd\") pod \"2611fb13-90fc-4310-a8dc-c224f4689a9f\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.790487 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-combined-ca-bundle\") pod \"2611fb13-90fc-4310-a8dc-c224f4689a9f\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.790568 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-scripts\") pod \"2611fb13-90fc-4310-a8dc-c224f4689a9f\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.790591 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-config-data\") pod \"2611fb13-90fc-4310-a8dc-c224f4689a9f\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.790623 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dh6m\" (UniqueName: \"kubernetes.io/projected/2611fb13-90fc-4310-a8dc-c224f4689a9f-kube-api-access-5dh6m\") pod \"2611fb13-90fc-4310-a8dc-c224f4689a9f\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.790658 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-run-httpd\") pod \"2611fb13-90fc-4310-a8dc-c224f4689a9f\" (UID: \"2611fb13-90fc-4310-a8dc-c224f4689a9f\") " Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.791083 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2611fb13-90fc-4310-a8dc-c224f4689a9f" (UID: "2611fb13-90fc-4310-a8dc-c224f4689a9f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.791408 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2611fb13-90fc-4310-a8dc-c224f4689a9f" (UID: "2611fb13-90fc-4310-a8dc-c224f4689a9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.791753 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.791779 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2611fb13-90fc-4310-a8dc-c224f4689a9f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.796716 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-scripts" (OuterVolumeSpecName: "scripts") pod "2611fb13-90fc-4310-a8dc-c224f4689a9f" (UID: "2611fb13-90fc-4310-a8dc-c224f4689a9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.796706 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2611fb13-90fc-4310-a8dc-c224f4689a9f-kube-api-access-5dh6m" (OuterVolumeSpecName: "kube-api-access-5dh6m") pod "2611fb13-90fc-4310-a8dc-c224f4689a9f" (UID: "2611fb13-90fc-4310-a8dc-c224f4689a9f"). InnerVolumeSpecName "kube-api-access-5dh6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.820694 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2611fb13-90fc-4310-a8dc-c224f4689a9f" (UID: "2611fb13-90fc-4310-a8dc-c224f4689a9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.856110 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2611fb13-90fc-4310-a8dc-c224f4689a9f" (UID: "2611fb13-90fc-4310-a8dc-c224f4689a9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.866611 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-config-data" (OuterVolumeSpecName: "config-data") pod "2611fb13-90fc-4310-a8dc-c224f4689a9f" (UID: "2611fb13-90fc-4310-a8dc-c224f4689a9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.893500 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.893548 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.893560 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dh6m\" (UniqueName: \"kubernetes.io/projected/2611fb13-90fc-4310-a8dc-c224f4689a9f-kube-api-access-5dh6m\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.893572 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:48 crc kubenswrapper[4688]: I1125 12:32:48.893581 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2611fb13-90fc-4310-a8dc-c224f4689a9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.553750 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2611fb13-90fc-4310-a8dc-c224f4689a9f","Type":"ContainerDied","Data":"0879ffb07d91d2cd05add6977f5f1cab6b672ac8fbe0772d79c85db85e4d5fe4"} Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.553785 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.554205 4688 scope.go:117] "RemoveContainer" containerID="324328d681dacb1433d9618dabba06d5f948bf092d929c8aff682d8748b343fe" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.556673 4688 generic.go:334] "Generic (PLEG): container finished" podID="588d841f-905c-42bb-9242-2e86b7e66877" containerID="234e8cc0f04be0ad168f3a9effceca6cb04ea5b4326a6c9175811a574e0e0aab" exitCode=0 Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.556729 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-slb7z" event={"ID":"588d841f-905c-42bb-9242-2e86b7e66877","Type":"ContainerDied","Data":"234e8cc0f04be0ad168f3a9effceca6cb04ea5b4326a6c9175811a574e0e0aab"} Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.585841 4688 scope.go:117] "RemoveContainer" containerID="5220104024f1355ff0f31e1e5cbe600e94d61aa589d0fab622dc461633654b35" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.605969 4688 scope.go:117] "RemoveContainer" containerID="4de5225a1b6ae02445eebb45e3f94dac16393d84450e58aa87720368e4a74838" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.649593 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.661156 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.676741 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:32:49 crc kubenswrapper[4688]: E1125 12:32:49.677235 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e944f85-67c1-480a-8656-e34aba801d33" containerName="dnsmasq-dns" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677263 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e944f85-67c1-480a-8656-e34aba801d33" containerName="dnsmasq-dns" Nov 25 12:32:49 crc kubenswrapper[4688]: E1125 12:32:49.677275 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="sg-core" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677283 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="sg-core" Nov 25 12:32:49 crc kubenswrapper[4688]: E1125 12:32:49.677299 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" containerName="heat-db-sync" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677307 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" containerName="heat-db-sync" Nov 25 12:32:49 crc kubenswrapper[4688]: E1125 12:32:49.677324 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="proxy-httpd" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677332 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="proxy-httpd" Nov 25 12:32:49 crc kubenswrapper[4688]: E1125 12:32:49.677350 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="ceilometer-notification-agent" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677358 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" 
containerName="ceilometer-notification-agent" Nov 25 12:32:49 crc kubenswrapper[4688]: E1125 12:32:49.677403 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e944f85-67c1-480a-8656-e34aba801d33" containerName="init" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677412 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e944f85-67c1-480a-8656-e34aba801d33" containerName="init" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677641 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="ceilometer-notification-agent" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677678 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" containerName="heat-db-sync" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677691 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="proxy-httpd" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677710 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e944f85-67c1-480a-8656-e34aba801d33" containerName="dnsmasq-dns" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.677726 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" containerName="sg-core" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.679857 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.681937 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.682103 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.686999 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.810737 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.810797 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-run-httpd\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.810829 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6f7j\" (UniqueName: \"kubernetes.io/projected/12a65ac1-2346-451a-a4cb-33286a015370-kube-api-access-j6f7j\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.810908 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-scripts\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " 
pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.810942 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-config-data\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.810971 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.811033 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-log-httpd\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.912613 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6f7j\" (UniqueName: \"kubernetes.io/projected/12a65ac1-2346-451a-a4cb-33286a015370-kube-api-access-j6f7j\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.912719 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-scripts\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.912765 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-config-data\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.912794 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.912854 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-log-httpd\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.912911 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.912937 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-run-httpd\") pod \"ceilometer-0\" (UID: 
\"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.913543 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-run-httpd\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.913590 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-log-httpd\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.918477 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.918853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-scripts\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.919693 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.919755 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-config-data\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:49 crc kubenswrapper[4688]: I1125 12:32:49.942739 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6f7j\" (UniqueName: \"kubernetes.io/projected/12a65ac1-2346-451a-a4cb-33286a015370-kube-api-access-j6f7j\") pod \"ceilometer-0\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " pod="openstack/ceilometer-0" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.001841 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:32:50 crc kubenswrapper[4688]: W1125 12:32:50.427601 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12a65ac1_2346_451a_a4cb_33286a015370.slice/crio-14b4c7c265b6cbf7146b46f7592244383fea1e00c590da4da200155b952d83c1 WatchSource:0}: Error finding container 14b4c7c265b6cbf7146b46f7592244383fea1e00c590da4da200155b952d83c1: Status 404 returned error can't find the container with id 14b4c7c265b6cbf7146b46f7592244383fea1e00c590da4da200155b952d83c1 Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.428102 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.567145 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerStarted","Data":"14b4c7c265b6cbf7146b46f7592244383fea1e00c590da4da200155b952d83c1"} Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.755070 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2611fb13-90fc-4310-a8dc-c224f4689a9f" path="/var/lib/kubelet/pods/2611fb13-90fc-4310-a8dc-c224f4689a9f/volumes" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.885880 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-slb7z" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.931658 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/588d841f-905c-42bb-9242-2e86b7e66877-etc-machine-id\") pod \"588d841f-905c-42bb-9242-2e86b7e66877\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.931708 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-config-data\") pod \"588d841f-905c-42bb-9242-2e86b7e66877\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.931735 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-combined-ca-bundle\") pod \"588d841f-905c-42bb-9242-2e86b7e66877\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.931765 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/588d841f-905c-42bb-9242-2e86b7e66877-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "588d841f-905c-42bb-9242-2e86b7e66877" (UID: "588d841f-905c-42bb-9242-2e86b7e66877"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.931781 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbwn\" (UniqueName: \"kubernetes.io/projected/588d841f-905c-42bb-9242-2e86b7e66877-kube-api-access-jbbwn\") pod \"588d841f-905c-42bb-9242-2e86b7e66877\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.931801 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-scripts\") pod \"588d841f-905c-42bb-9242-2e86b7e66877\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.931888 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-db-sync-config-data\") pod \"588d841f-905c-42bb-9242-2e86b7e66877\" (UID: \"588d841f-905c-42bb-9242-2e86b7e66877\") " Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.932254 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/588d841f-905c-42bb-9242-2e86b7e66877-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.940690 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "588d841f-905c-42bb-9242-2e86b7e66877" (UID: "588d841f-905c-42bb-9242-2e86b7e66877"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.942284 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588d841f-905c-42bb-9242-2e86b7e66877-kube-api-access-jbbwn" (OuterVolumeSpecName: "kube-api-access-jbbwn") pod "588d841f-905c-42bb-9242-2e86b7e66877" (UID: "588d841f-905c-42bb-9242-2e86b7e66877"). InnerVolumeSpecName "kube-api-access-jbbwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.952160 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-scripts" (OuterVolumeSpecName: "scripts") pod "588d841f-905c-42bb-9242-2e86b7e66877" (UID: "588d841f-905c-42bb-9242-2e86b7e66877"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.967564 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "588d841f-905c-42bb-9242-2e86b7e66877" (UID: "588d841f-905c-42bb-9242-2e86b7e66877"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:50 crc kubenswrapper[4688]: I1125 12:32:50.996812 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-config-data" (OuterVolumeSpecName: "config-data") pod "588d841f-905c-42bb-9242-2e86b7e66877" (UID: "588d841f-905c-42bb-9242-2e86b7e66877"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.034575 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.034620 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.034631 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.034644 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbwn\" (UniqueName: \"kubernetes.io/projected/588d841f-905c-42bb-9242-2e86b7e66877-kube-api-access-jbbwn\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.034657 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/588d841f-905c-42bb-9242-2e86b7e66877-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.575758 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-slb7z" event={"ID":"588d841f-905c-42bb-9242-2e86b7e66877","Type":"ContainerDied","Data":"47c4ce26c1183d67481f0b0b0117b5afbaca498b91b1beb03bcca855261c6342"} Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.576051 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47c4ce26c1183d67481f0b0b0117b5afbaca498b91b1beb03bcca855261c6342" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.576017 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-slb7z" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.577654 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerStarted","Data":"c88bf2ba39592659f642aba45bbedd6f4bbd7046240427294653f66e4b94df02"} Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.842372 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:32:51 crc kubenswrapper[4688]: E1125 12:32:51.842886 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588d841f-905c-42bb-9242-2e86b7e66877" containerName="cinder-db-sync" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.842904 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="588d841f-905c-42bb-9242-2e86b7e66877" containerName="cinder-db-sync" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.843132 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="588d841f-905c-42bb-9242-2e86b7e66877" containerName="cinder-db-sync" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.846480 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.850401 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kz9g5" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.850680 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.856422 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.856624 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.864403 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.918468 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bd78x"] Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.920378 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.934046 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bd78x"] Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.954942 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.954996 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.955143 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.955344 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9633d99f-02a1-4737-a439-01bfe5d79d6f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.955507 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:51 crc kubenswrapper[4688]: I1125 12:32:51.955596 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2gw\" (UniqueName: 
\"kubernetes.io/projected/9633d99f-02a1-4737-a439-01bfe5d79d6f-kube-api-access-vs2gw\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.070759 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.070828 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.070861 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs2gw\" (UniqueName: \"kubernetes.io/projected/9633d99f-02a1-4737-a439-01bfe5d79d6f-kube-api-access-vs2gw\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.070898 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.070929 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8sl9\" (UniqueName: \"kubernetes.io/projected/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-kube-api-access-h8sl9\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.070988 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.071018 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.071053 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.071080 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.071124 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.071184 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9633d99f-02a1-4737-a439-01bfe5d79d6f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.071201 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-config\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.073293 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9633d99f-02a1-4737-a439-01bfe5d79d6f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.075696 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.084694 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.086191 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.089301 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.099923 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs2gw\" (UniqueName: \"kubernetes.io/projected/9633d99f-02a1-4737-a439-01bfe5d79d6f-kube-api-access-vs2gw\") pod \"cinder-scheduler-0\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.128585 4688 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.130886 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.133301 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.149146 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.172679 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.172743 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8sl9\" (UniqueName: \"kubernetes.io/projected/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-kube-api-access-h8sl9\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.172916 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.173021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.173224 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-config\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.173311 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.174437 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.175659 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: 
\"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.176179 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.177889 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.181165 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-config\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.182671 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.192310 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8sl9\" (UniqueName: \"kubernetes.io/projected/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-kube-api-access-h8sl9\") pod \"dnsmasq-dns-5c9776ccc5-bd78x\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.239974 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.276073 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf8pt\" (UniqueName: \"kubernetes.io/projected/37c87f58-1895-42e8-96eb-b8e12b5ad69c-kube-api-access-wf8pt\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.276165 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data-custom\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.276207 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c87f58-1895-42e8-96eb-b8e12b5ad69c-logs\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.276244 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.276321 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.276354 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37c87f58-1895-42e8-96eb-b8e12b5ad69c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.276401 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-scripts\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.378329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.378787 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37c87f58-1895-42e8-96eb-b8e12b5ad69c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.378825 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-scripts\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.378879 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf8pt\" (UniqueName: \"kubernetes.io/projected/37c87f58-1895-42e8-96eb-b8e12b5ad69c-kube-api-access-wf8pt\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.378931 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data-custom\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.378964 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c87f58-1895-42e8-96eb-b8e12b5ad69c-logs\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.379002 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.381111 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37c87f58-1895-42e8-96eb-b8e12b5ad69c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.384455 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-scripts\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.385061 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.385208 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.385366 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c87f58-1895-42e8-96eb-b8e12b5ad69c-logs\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.387394 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data-custom\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.399506 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf8pt\" (UniqueName: \"kubernetes.io/projected/37c87f58-1895-42e8-96eb-b8e12b5ad69c-kube-api-access-wf8pt\") pod \"cinder-api-0\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.575026 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.592223 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerStarted","Data":"c041edd72ce2c13278ce43c4dc0129671f9fa55433e0a9d55bb9a1781b8cec87"} Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.738989 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:32:52 crc kubenswrapper[4688]: I1125 12:32:52.843244 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bd78x"] Nov 25 12:32:53 crc kubenswrapper[4688]: I1125 12:32:53.127374 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:53 crc kubenswrapper[4688]: I1125 12:32:53.621340 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerStarted","Data":"ddca4c2c0d3cb1b55ca5d741454911767e00bdadd65ad40a2067cba0166644ad"} Nov 25 12:32:53 crc kubenswrapper[4688]: I1125 12:32:53.625365 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37c87f58-1895-42e8-96eb-b8e12b5ad69c","Type":"ContainerStarted","Data":"ab445472b09d0c9a631b183d106952c119d74e2c06abf9a675b461b3164dc3cb"} Nov 25 12:32:53 crc kubenswrapper[4688]: I1125 12:32:53.632320 4688 generic.go:334] "Generic (PLEG): container finished" podID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerID="5817c49d63c2e55225b8fbc2083fa30823bae14e95fdb912eae8d2327886ee80" exitCode=0 Nov 25 12:32:53 crc kubenswrapper[4688]: I1125 12:32:53.632397 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" event={"ID":"4bc73c38-2fd7-464c-99a3-4fb5fea684c8","Type":"ContainerDied","Data":"5817c49d63c2e55225b8fbc2083fa30823bae14e95fdb912eae8d2327886ee80"} Nov 25 12:32:53 crc kubenswrapper[4688]: I1125 12:32:53.632420 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" event={"ID":"4bc73c38-2fd7-464c-99a3-4fb5fea684c8","Type":"ContainerStarted","Data":"b082472ff2b4610ba5315dfa4a83a87b3029b422701320e2171f90d852f0e7c5"} Nov 25 12:32:53 crc kubenswrapper[4688]: I1125 12:32:53.635964 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9633d99f-02a1-4737-a439-01bfe5d79d6f","Type":"ContainerStarted","Data":"191e9fa27be5db977b875aac0365ac5853ea1e7227f75777124296872a8c28b2"} Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.384888 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.668133 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"37c87f58-1895-42e8-96eb-b8e12b5ad69c","Type":"ContainerStarted","Data":"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160"} Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.671488 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" event={"ID":"4bc73c38-2fd7-464c-99a3-4fb5fea684c8","Type":"ContainerStarted","Data":"6353734e7aae2f0844dc5ae9d6a1094b4ae12f5d548df5c61d41f9ffde3fad62"} Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.671768 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.701084 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" podStartSLOduration=3.701066812 podStartE2EDuration="3.701066812s" podCreationTimestamp="2025-11-25 12:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:54.698963726 +0000 UTC m=+1124.808592594" watchObservedRunningTime="2025-11-25 12:32:54.701066812 +0000 UTC m=+1124.810695680" Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.794106 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.874891 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c8499f65b-2b8f7" Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.938870 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59c48fd54-hnlcs"] Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.939193 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59c48fd54-hnlcs" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api-log" containerID="cri-o://8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19" gracePeriod=30 Nov 25 12:32:54 crc kubenswrapper[4688]: I1125 12:32:54.939719 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59c48fd54-hnlcs" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api" containerID="cri-o://d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1" gracePeriod=30 Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.692835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37c87f58-1895-42e8-96eb-b8e12b5ad69c","Type":"ContainerStarted","Data":"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c"} Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.693379 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.693054 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerName="cinder-api" containerID="cri-o://fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c" gracePeriod=30 Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.692952 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerName="cinder-api-log" 
containerID="cri-o://5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160" gracePeriod=30 Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.700188 4688 generic.go:334] "Generic (PLEG): container finished" podID="80800696-d4a7-436c-82b5-0f627dd289db" containerID="8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19" exitCode=143 Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.700288 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c48fd54-hnlcs" event={"ID":"80800696-d4a7-436c-82b5-0f627dd289db","Type":"ContainerDied","Data":"8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19"} Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.706235 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9633d99f-02a1-4737-a439-01bfe5d79d6f","Type":"ContainerStarted","Data":"79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6"} Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.706299 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9633d99f-02a1-4737-a439-01bfe5d79d6f","Type":"ContainerStarted","Data":"8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8"} Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.712511 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerStarted","Data":"9f28bc1ec3cb2644230c796092dbaa4cd8a5bbf62d94459a7e416f72d9e439a7"} Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.721797 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.721770935 podStartE2EDuration="3.721770935s" podCreationTimestamp="2025-11-25 12:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:55.720517743 +0000 UTC m=+1125.830146611" watchObservedRunningTime="2025-11-25 12:32:55.721770935 +0000 UTC m=+1125.831399803" Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.755570 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.725720102 podStartE2EDuration="6.755551433s" podCreationTimestamp="2025-11-25 12:32:49 +0000 UTC" firstStartedPulling="2025-11-25 12:32:50.429824268 +0000 UTC m=+1120.539453136" lastFinishedPulling="2025-11-25 12:32:54.459655599 +0000 UTC m=+1124.569284467" observedRunningTime="2025-11-25 12:32:55.746368756 +0000 UTC m=+1125.855997634" watchObservedRunningTime="2025-11-25 12:32:55.755551433 +0000 UTC m=+1125.865180301" Nov 25 12:32:55 crc kubenswrapper[4688]: I1125 12:32:55.784904 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.113009167 podStartE2EDuration="4.784886481s" podCreationTimestamp="2025-11-25 12:32:51 +0000 UTC" firstStartedPulling="2025-11-25 12:32:52.788577418 +0000 UTC m=+1122.898206286" lastFinishedPulling="2025-11-25 12:32:53.460454722 +0000 UTC m=+1123.570083600" observedRunningTime="2025-11-25 12:32:55.783342479 +0000 UTC m=+1125.892971357" watchObservedRunningTime="2025-11-25 12:32:55.784886481 +0000 UTC m=+1125.894515349" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.498120 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.572478 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data\") pod \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.572622 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c87f58-1895-42e8-96eb-b8e12b5ad69c-logs\") pod \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.572715 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data-custom\") pod \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.572794 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf8pt\" (UniqueName: \"kubernetes.io/projected/37c87f58-1895-42e8-96eb-b8e12b5ad69c-kube-api-access-wf8pt\") pod \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.572827 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-scripts\") pod \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.572861 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-combined-ca-bundle\") pod \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.572952 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37c87f58-1895-42e8-96eb-b8e12b5ad69c-etc-machine-id\") pod \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\" (UID: \"37c87f58-1895-42e8-96eb-b8e12b5ad69c\") " Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.573553 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37c87f58-1895-42e8-96eb-b8e12b5ad69c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37c87f58-1895-42e8-96eb-b8e12b5ad69c" (UID: "37c87f58-1895-42e8-96eb-b8e12b5ad69c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.573884 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c87f58-1895-42e8-96eb-b8e12b5ad69c-logs" (OuterVolumeSpecName: "logs") pod "37c87f58-1895-42e8-96eb-b8e12b5ad69c" (UID: "37c87f58-1895-42e8-96eb-b8e12b5ad69c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.573978 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37c87f58-1895-42e8-96eb-b8e12b5ad69c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.583853 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-scripts" (OuterVolumeSpecName: "scripts") pod "37c87f58-1895-42e8-96eb-b8e12b5ad69c" (UID: "37c87f58-1895-42e8-96eb-b8e12b5ad69c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.592201 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c87f58-1895-42e8-96eb-b8e12b5ad69c-kube-api-access-wf8pt" (OuterVolumeSpecName: "kube-api-access-wf8pt") pod "37c87f58-1895-42e8-96eb-b8e12b5ad69c" (UID: "37c87f58-1895-42e8-96eb-b8e12b5ad69c"). InnerVolumeSpecName "kube-api-access-wf8pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.592450 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37c87f58-1895-42e8-96eb-b8e12b5ad69c" (UID: "37c87f58-1895-42e8-96eb-b8e12b5ad69c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.608845 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37c87f58-1895-42e8-96eb-b8e12b5ad69c" (UID: "37c87f58-1895-42e8-96eb-b8e12b5ad69c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.642244 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data" (OuterVolumeSpecName: "config-data") pod "37c87f58-1895-42e8-96eb-b8e12b5ad69c" (UID: "37c87f58-1895-42e8-96eb-b8e12b5ad69c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.675749 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.675791 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37c87f58-1895-42e8-96eb-b8e12b5ad69c-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.675804 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.675821 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf8pt\" (UniqueName: \"kubernetes.io/projected/37c87f58-1895-42e8-96eb-b8e12b5ad69c-kube-api-access-wf8pt\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.675835 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.675846 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c87f58-1895-42e8-96eb-b8e12b5ad69c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.724628 4688 generic.go:334] "Generic (PLEG): container finished" podID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerID="fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c" exitCode=0 Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.724668 4688 generic.go:334] "Generic (PLEG): container finished" podID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerID="5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160" exitCode=143 Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.725785 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37c87f58-1895-42e8-96eb-b8e12b5ad69c","Type":"ContainerDied","Data":"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c"} Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.725820 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37c87f58-1895-42e8-96eb-b8e12b5ad69c","Type":"ContainerDied","Data":"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160"} Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.725832 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37c87f58-1895-42e8-96eb-b8e12b5ad69c","Type":"ContainerDied","Data":"ab445472b09d0c9a631b183d106952c119d74e2c06abf9a675b461b3164dc3cb"} Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.725846 4688 scope.go:117] "RemoveContainer" containerID="fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.725914 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.725963 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.764106 4688 scope.go:117] "RemoveContainer" containerID="5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.830107 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.865008 4688 scope.go:117] "RemoveContainer" containerID="fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.866676 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:56 crc kubenswrapper[4688]: E1125 12:32:56.870735 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c\": container with ID starting with fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c not found: ID does not exist" containerID="fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.870800 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c"} err="failed to get container status \"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c\": rpc error: code = NotFound desc = could not find container \"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c\": container with ID starting with fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c not found: ID does not exist" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.870837 4688 scope.go:117] "RemoveContainer" containerID="5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160" Nov 25 12:32:56 crc kubenswrapper[4688]: E1125 12:32:56.871373 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160\": container with ID starting with 5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160 not found: ID does not exist" containerID="5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.871420 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160"} err="failed to get container status \"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160\": rpc error: code = NotFound desc = could not find container \"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160\": container with ID starting with 5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160 not found: ID does not exist" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.871459 4688 scope.go:117] "RemoveContainer" containerID="fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.871993 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c"} err="failed to get container status \"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c\": rpc error: code = NotFound desc = 
could not find container \"fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c\": container with ID starting with fe7d3284f0f1565665fd109d5fd9169dfe9778583f3e4dead55b4e59740b8b3c not found: ID does not exist" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.872040 4688 scope.go:117] "RemoveContainer" containerID="5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.872362 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160"} err="failed to get container status \"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160\": rpc error: code = NotFound desc = could not find container \"5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160\": container with ID starting with 5f0aa42ca2fed14f6773d921ba91e2debf38ddb1a1bfb74a4e3db94274442160 not found: ID does not exist" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.877778 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:56 crc kubenswrapper[4688]: E1125 12:32:56.878255 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerName="cinder-api" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.878281 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerName="cinder-api" Nov 25 12:32:56 crc kubenswrapper[4688]: E1125 12:32:56.878312 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerName="cinder-api-log" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.878322 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerName="cinder-api-log" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.878633 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerName="cinder-api" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.878661 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" containerName="cinder-api-log" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.892350 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.892485 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.896165 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.896455 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.896668 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982042 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-config-data\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982095 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982127 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982165 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982196 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-logs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982226 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982246 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgqc\" (UniqueName: \"kubernetes.io/projected/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-kube-api-access-nwgqc\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982328 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " 
pod="openstack/cinder-api-0" Nov 25 12:32:56 crc kubenswrapper[4688]: I1125 12:32:56.982359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-scripts\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084536 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-scripts\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084597 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-config-data\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084620 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084641 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084668 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084689 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-logs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084708 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.084724 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgqc\" (UniqueName: \"kubernetes.io/projected/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-kube-api-access-nwgqc\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") 
" pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.085178 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-logs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.085248 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.088077 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.089221 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.091425 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-config-data\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.092939 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.099874 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.108630 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-scripts\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.112089 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgqc\" (UniqueName: \"kubernetes.io/projected/6a92f2f8-f7d9-42da-8f61-d595c6e2e10b-kube-api-access-nwgqc\") pod \"cinder-api-0\" (UID: \"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b\") " pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.183613 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.237394 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 12:32:57 crc kubenswrapper[4688]: I1125 12:32:57.724041 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.376493 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.680827 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.734137 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhdgr\" (UniqueName: \"kubernetes.io/projected/80800696-d4a7-436c-82b5-0f627dd289db-kube-api-access-fhdgr\") pod \"80800696-d4a7-436c-82b5-0f627dd289db\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.734198 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data\") pod \"80800696-d4a7-436c-82b5-0f627dd289db\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.734255 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80800696-d4a7-436c-82b5-0f627dd289db-logs\") pod \"80800696-d4a7-436c-82b5-0f627dd289db\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.734513 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-combined-ca-bundle\") pod \"80800696-d4a7-436c-82b5-0f627dd289db\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.734605 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data-custom\") pod \"80800696-d4a7-436c-82b5-0f627dd289db\" (UID: \"80800696-d4a7-436c-82b5-0f627dd289db\") " Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.735183 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80800696-d4a7-436c-82b5-0f627dd289db-logs" (OuterVolumeSpecName: "logs") pod "80800696-d4a7-436c-82b5-0f627dd289db" (UID: "80800696-d4a7-436c-82b5-0f627dd289db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.735507 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80800696-d4a7-436c-82b5-0f627dd289db-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.746965 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80800696-d4a7-436c-82b5-0f627dd289db-kube-api-access-fhdgr" (OuterVolumeSpecName: "kube-api-access-fhdgr") pod "80800696-d4a7-436c-82b5-0f627dd289db" (UID: "80800696-d4a7-436c-82b5-0f627dd289db"). InnerVolumeSpecName "kube-api-access-fhdgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.747255 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "80800696-d4a7-436c-82b5-0f627dd289db" (UID: "80800696-d4a7-436c-82b5-0f627dd289db"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.760902 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c87f58-1895-42e8-96eb-b8e12b5ad69c" path="/var/lib/kubelet/pods/37c87f58-1895-42e8-96eb-b8e12b5ad69c/volumes" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.774871 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80800696-d4a7-436c-82b5-0f627dd289db" (UID: "80800696-d4a7-436c-82b5-0f627dd289db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.775313 4688 generic.go:334] "Generic (PLEG): container finished" podID="80800696-d4a7-436c-82b5-0f627dd289db" containerID="d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1" exitCode=0 Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.775404 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c48fd54-hnlcs" event={"ID":"80800696-d4a7-436c-82b5-0f627dd289db","Type":"ContainerDied","Data":"d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1"} Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.775451 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c48fd54-hnlcs" event={"ID":"80800696-d4a7-436c-82b5-0f627dd289db","Type":"ContainerDied","Data":"4983496bb9ed718d9ff2c1ad78b03c8eafef938207012fc918728b30efcf9f79"} Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.775468 4688 scope.go:117] "RemoveContainer" containerID="d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.775646 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c48fd54-hnlcs" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.783413 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b","Type":"ContainerStarted","Data":"312a81b8da9919bc51644066921c0ee722d379b05bb82bd562e0512bd55165fb"} Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.783490 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b","Type":"ContainerStarted","Data":"cd8b6a903f8a76527dd435a67c050c6448594559073cdf230c4396e758a69705"} Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.817294 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data" (OuterVolumeSpecName: "config-data") pod "80800696-d4a7-436c-82b5-0f627dd289db" (UID: "80800696-d4a7-436c-82b5-0f627dd289db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.836319 4688 scope.go:117] "RemoveContainer" containerID="8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.837482 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.837511 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.837544 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhdgr\" (UniqueName: \"kubernetes.io/projected/80800696-d4a7-436c-82b5-0f627dd289db-kube-api-access-fhdgr\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.837564 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80800696-d4a7-436c-82b5-0f627dd289db-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.881248 4688 scope.go:117] "RemoveContainer" containerID="d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1" Nov 25 12:32:58 crc kubenswrapper[4688]: E1125 12:32:58.882228 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1\": container with ID starting with d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1 not found: ID does not exist" containerID="d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.882281 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1"} err="failed to get container status \"d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1\": rpc error: code = NotFound desc = could not find container \"d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1\": container with ID starting with d7e94e03ec2e6c58724586179e97e6882cf5e489e4999c1a54995f491c1f69a1 not found: ID does not exist" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.882320 4688 scope.go:117] "RemoveContainer" containerID="8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19" Nov 25 12:32:58 crc kubenswrapper[4688]: E1125 12:32:58.882889 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19\": container with ID starting with 8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19 not found: ID does not exist" containerID="8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19" Nov 25 12:32:58 crc kubenswrapper[4688]: I1125 12:32:58.882916 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19"} err="failed to get container status \"8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19\": rpc error: code = 
NotFound desc = could not find container \"8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19\": container with ID starting with 8be227cb2384199655bbf1cf6906225cb741d5e0da4a1d8dfa7f6ce23397fe19 not found: ID does not exist" Nov 25 12:32:59 crc kubenswrapper[4688]: I1125 12:32:59.113713 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59c48fd54-hnlcs"] Nov 25 12:32:59 crc kubenswrapper[4688]: I1125 12:32:59.134269 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-59c48fd54-hnlcs"] Nov 25 12:32:59 crc kubenswrapper[4688]: I1125 12:32:59.816170 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6a92f2f8-f7d9-42da-8f61-d595c6e2e10b","Type":"ContainerStarted","Data":"7606fc19d16fbd8d99fa5783d34a37211a4a70e8edcc07a281252dd6be0b25f0"} Nov 25 12:32:59 crc kubenswrapper[4688]: I1125 12:32:59.816434 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 12:32:59 crc kubenswrapper[4688]: I1125 12:32:59.848122 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.848099119 podStartE2EDuration="3.848099119s" podCreationTimestamp="2025-11-25 12:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:32:59.832192171 +0000 UTC m=+1129.941821059" watchObservedRunningTime="2025-11-25 12:32:59.848099119 +0000 UTC m=+1129.957727987" Nov 25 12:33:00 crc kubenswrapper[4688]: I1125 12:33:00.533007 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f854495df-t6szb" Nov 25 12:33:00 crc kubenswrapper[4688]: I1125 12:33:00.616646 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64d655c956-kd82z"] Nov 25 12:33:00 crc kubenswrapper[4688]: I1125 12:33:00.617044 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64d655c956-kd82z" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerName="neutron-api" containerID="cri-o://ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1" gracePeriod=30 Nov 25 12:33:00 crc kubenswrapper[4688]: I1125 12:33:00.617839 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64d655c956-kd82z" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerName="neutron-httpd" containerID="cri-o://63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97" gracePeriod=30 Nov 25 12:33:00 crc kubenswrapper[4688]: I1125 12:33:00.758963 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80800696-d4a7-436c-82b5-0f627dd289db" path="/var/lib/kubelet/pods/80800696-d4a7-436c-82b5-0f627dd289db/volumes" Nov 25 12:33:01 crc kubenswrapper[4688]: I1125 12:33:01.845563 4688 generic.go:334] "Generic (PLEG): container finished" podID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerID="63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97" exitCode=0 Nov 25 12:33:01 crc kubenswrapper[4688]: I1125 12:33:01.845628 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d655c956-kd82z" event={"ID":"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee","Type":"ContainerDied","Data":"63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97"} Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.241682 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.310111 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ljt6t"] Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.310369 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" podUID="cb7419cf-e7db-4199-9ffa-426db08c3b43" containerName="dnsmasq-dns" containerID="cri-o://1700eb6e04dcc5607550ab26263f99417b3aa0b8dac371f13dab0b251c3dbb67" gracePeriod=10 Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.460030 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.537593 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.889837 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb7419cf-e7db-4199-9ffa-426db08c3b43" containerID="1700eb6e04dcc5607550ab26263f99417b3aa0b8dac371f13dab0b251c3dbb67" exitCode=0 Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.890969 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerName="cinder-scheduler" containerID="cri-o://8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8" gracePeriod=30 Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.890111 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" event={"ID":"cb7419cf-e7db-4199-9ffa-426db08c3b43","Type":"ContainerDied","Data":"1700eb6e04dcc5607550ab26263f99417b3aa0b8dac371f13dab0b251c3dbb67"} Nov 25 12:33:02 crc kubenswrapper[4688]: I1125 12:33:02.891351 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerName="probe" containerID="cri-o://79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6" gracePeriod=30 Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.008465 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.149898 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-nb\") pod \"cb7419cf-e7db-4199-9ffa-426db08c3b43\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.150494 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-svc\") pod \"cb7419cf-e7db-4199-9ffa-426db08c3b43\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.150928 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-sb\") pod \"cb7419cf-e7db-4199-9ffa-426db08c3b43\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.151050 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-swift-storage-0\") pod \"cb7419cf-e7db-4199-9ffa-426db08c3b43\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.151147 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-config\") pod \"cb7419cf-e7db-4199-9ffa-426db08c3b43\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.151357 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r52c2\" (UniqueName: \"kubernetes.io/projected/cb7419cf-e7db-4199-9ffa-426db08c3b43-kube-api-access-r52c2\") pod \"cb7419cf-e7db-4199-9ffa-426db08c3b43\" (UID: \"cb7419cf-e7db-4199-9ffa-426db08c3b43\") " Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.168341 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7419cf-e7db-4199-9ffa-426db08c3b43-kube-api-access-r52c2" (OuterVolumeSpecName: "kube-api-access-r52c2") pod "cb7419cf-e7db-4199-9ffa-426db08c3b43" (UID: "cb7419cf-e7db-4199-9ffa-426db08c3b43"). InnerVolumeSpecName "kube-api-access-r52c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.199737 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb7419cf-e7db-4199-9ffa-426db08c3b43" (UID: "cb7419cf-e7db-4199-9ffa-426db08c3b43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.211878 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb7419cf-e7db-4199-9ffa-426db08c3b43" (UID: "cb7419cf-e7db-4199-9ffa-426db08c3b43"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.213897 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb7419cf-e7db-4199-9ffa-426db08c3b43" (UID: "cb7419cf-e7db-4199-9ffa-426db08c3b43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.240110 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb7419cf-e7db-4199-9ffa-426db08c3b43" (UID: "cb7419cf-e7db-4199-9ffa-426db08c3b43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.245711 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-config" (OuterVolumeSpecName: "config") pod "cb7419cf-e7db-4199-9ffa-426db08c3b43" (UID: "cb7419cf-e7db-4199-9ffa-426db08c3b43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.253075 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r52c2\" (UniqueName: \"kubernetes.io/projected/cb7419cf-e7db-4199-9ffa-426db08c3b43-kube-api-access-r52c2\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.253111 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.253121 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.253130 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.253139 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.253146 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7419cf-e7db-4199-9ffa-426db08c3b43-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.899824 4688 generic.go:334] "Generic (PLEG): container finished" podID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerID="79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6" exitCode=0 Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.899909 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9633d99f-02a1-4737-a439-01bfe5d79d6f","Type":"ContainerDied","Data":"79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6"} Nov 25 12:33:03 crc 
kubenswrapper[4688]: I1125 12:33:03.902688 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" event={"ID":"cb7419cf-e7db-4199-9ffa-426db08c3b43","Type":"ContainerDied","Data":"4425fb9261fd6033ed908c95ff0444050bd5ba3834422c9aa8c82c00995ea793"} Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.902763 4688 scope.go:117] "RemoveContainer" containerID="1700eb6e04dcc5607550ab26263f99417b3aa0b8dac371f13dab0b251c3dbb67" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.902865 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ljt6t" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.924890 4688 scope.go:117] "RemoveContainer" containerID="b058e397b490af581d8f1b9bb7fd2d328e501b3c4d4a3b7c68624b4d8b0a6aed" Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.940571 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ljt6t"] Nov 25 12:33:03 crc kubenswrapper[4688]: I1125 12:33:03.953566 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ljt6t"] Nov 25 12:33:04 crc kubenswrapper[4688]: I1125 12:33:04.097396 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-544b4d8674-8x8rj" Nov 25 12:33:04 crc kubenswrapper[4688]: I1125 12:33:04.752984 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7419cf-e7db-4199-9ffa-426db08c3b43" path="/var/lib/kubelet/pods/cb7419cf-e7db-4199-9ffa-426db08c3b43/volumes" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.609089 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.711638 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-scripts\") pod \"9633d99f-02a1-4737-a439-01bfe5d79d6f\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.711715 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs2gw\" (UniqueName: \"kubernetes.io/projected/9633d99f-02a1-4737-a439-01bfe5d79d6f-kube-api-access-vs2gw\") pod \"9633d99f-02a1-4737-a439-01bfe5d79d6f\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.711869 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9633d99f-02a1-4737-a439-01bfe5d79d6f-etc-machine-id\") pod \"9633d99f-02a1-4737-a439-01bfe5d79d6f\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.711910 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data\") pod \"9633d99f-02a1-4737-a439-01bfe5d79d6f\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.711992 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9633d99f-02a1-4737-a439-01bfe5d79d6f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9633d99f-02a1-4737-a439-01bfe5d79d6f" (UID: "9633d99f-02a1-4737-a439-01bfe5d79d6f"). 
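[editor's note] The "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod ... ContainerDied" lines come from the Pod Lifecycle Event Generator comparing two relist snapshots of the runtime. A sketch of that comparison, loosely modeled on the kubelet's generic PLEG (simplified types, not the actual pkg/kubelet/pleg code):

```go
package main

import "fmt"

type containerState string

const (
	stateRunning containerState = "running"
	stateExited  containerState = "exited"
)

// podRecord maps containerID -> last observed state for one pod.
type podRecord map[string]containerState

// computeEvents emits ContainerDied for anything that was running in the old
// snapshot and is exited or gone in the new one.
func computeEvents(podID string, old, cur podRecord) {
	for id, was := range old {
		now, ok := cur[id]
		if was == stateRunning && (!ok || now == stateExited) {
			fmt.Printf("SyncLoop (PLEG): event for pod %s: ContainerDied %s\n", podID, id)
		}
	}
}

func main() {
	old := podRecord{"79c2ba63a448...": stateRunning}
	cur := podRecord{"79c2ba63a448...": stateExited}
	computeEvents("openstack/cinder-scheduler-0", old, cur)
}
```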
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.712004 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data-custom\") pod \"9633d99f-02a1-4737-a439-01bfe5d79d6f\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.712116 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-combined-ca-bundle\") pod \"9633d99f-02a1-4737-a439-01bfe5d79d6f\" (UID: \"9633d99f-02a1-4737-a439-01bfe5d79d6f\") " Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.712647 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9633d99f-02a1-4737-a439-01bfe5d79d6f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.718864 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9633d99f-02a1-4737-a439-01bfe5d79d6f" (UID: "9633d99f-02a1-4737-a439-01bfe5d79d6f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.720130 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9633d99f-02a1-4737-a439-01bfe5d79d6f-kube-api-access-vs2gw" (OuterVolumeSpecName: "kube-api-access-vs2gw") pod "9633d99f-02a1-4737-a439-01bfe5d79d6f" (UID: "9633d99f-02a1-4737-a439-01bfe5d79d6f"). InnerVolumeSpecName "kube-api-access-vs2gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.721722 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-scripts" (OuterVolumeSpecName: "scripts") pod "9633d99f-02a1-4737-a439-01bfe5d79d6f" (UID: "9633d99f-02a1-4737-a439-01bfe5d79d6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.777943 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9633d99f-02a1-4737-a439-01bfe5d79d6f" (UID: "9633d99f-02a1-4737-a439-01bfe5d79d6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.816631 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.816678 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.816725 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.816740 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs2gw\" (UniqueName: \"kubernetes.io/projected/9633d99f-02a1-4737-a439-01bfe5d79d6f-kube-api-access-vs2gw\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.818004 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data" (OuterVolumeSpecName: "config-data") pod "9633d99f-02a1-4737-a439-01bfe5d79d6f" (UID: "9633d99f-02a1-4737-a439-01bfe5d79d6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.918413 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9633d99f-02a1-4737-a439-01bfe5d79d6f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.932746 4688 generic.go:334] "Generic (PLEG): container finished" podID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerID="8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8" exitCode=0 Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.932792 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9633d99f-02a1-4737-a439-01bfe5d79d6f","Type":"ContainerDied","Data":"8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8"} Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.932804 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.932851 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9633d99f-02a1-4737-a439-01bfe5d79d6f","Type":"ContainerDied","Data":"191e9fa27be5db977b875aac0365ac5853ea1e7227f75777124296872a8c28b2"} Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.932862 4688 scope.go:117] "RemoveContainer" containerID="79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6" Nov 25 12:33:06 crc kubenswrapper[4688]: I1125 12:33:06.965846 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.028440 4688 scope.go:117] "RemoveContainer" containerID="8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.042807 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.050607 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:33:07 crc kubenswrapper[4688]: E1125 12:33:07.051102 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051120 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api" Nov 25 12:33:07 crc kubenswrapper[4688]: E1125 12:33:07.051138 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerName="probe" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051146 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerName="probe" Nov 25 12:33:07 crc kubenswrapper[4688]: E1125 12:33:07.051159 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api-log" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051166 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api-log" Nov 25 12:33:07 crc kubenswrapper[4688]: E1125 12:33:07.051177 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7419cf-e7db-4199-9ffa-426db08c3b43" containerName="dnsmasq-dns" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051185 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7419cf-e7db-4199-9ffa-426db08c3b43" containerName="dnsmasq-dns" Nov 25 12:33:07 crc kubenswrapper[4688]: E1125 12:33:07.051211 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerName="cinder-scheduler" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051221 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerName="cinder-scheduler" Nov 25 12:33:07 crc kubenswrapper[4688]: E1125 12:33:07.051243 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7419cf-e7db-4199-9ffa-426db08c3b43" containerName="init" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051251 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7419cf-e7db-4199-9ffa-426db08c3b43" containerName="init" Nov 25 12:33:07 crc kubenswrapper[4688]: 
I1125 12:33:07.051487 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerName="cinder-scheduler" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051504 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" containerName="probe" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051514 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api-log" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051548 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="80800696-d4a7-436c-82b5-0f627dd289db" containerName="barbican-api" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.051564 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7419cf-e7db-4199-9ffa-426db08c3b43" containerName="dnsmasq-dns" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.052752 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.056699 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.057319 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.065739 4688 scope.go:117] "RemoveContainer" containerID="79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6" Nov 25 12:33:07 crc kubenswrapper[4688]: E1125 12:33:07.066434 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6\": container with ID starting with 79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6 not found: ID does not exist" containerID="79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.066480 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6"} err="failed to get container status \"79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6\": rpc error: code = NotFound desc = could not find container \"79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6\": container with ID starting with 79c2ba63a44855810570f276f4cafedf489385ab02122b6cfa463d3bb27ed8e6 not found: ID does not exist" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.066507 4688 scope.go:117] "RemoveContainer" containerID="8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8" Nov 25 12:33:07 crc kubenswrapper[4688]: E1125 12:33:07.067681 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8\": container with ID starting with 8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8 not found: ID does not exist" containerID="8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.067713 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8"} err="failed to get container status \"8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8\": rpc error: code = NotFound desc = could not find container \"8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8\": container with ID starting with 8a14c9b8e7d993688e0c34f69a0887e81868400e149e578373e015ee394da1d8 not found: ID does not exist" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.227858 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-config-data\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.227926 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.227958 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-scripts\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.227996 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.228318 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78027e7c-30ce-4ec6-b928-f9b1836c3568-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.228361 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t246h\" (UniqueName: \"kubernetes.io/projected/78027e7c-30ce-4ec6-b928-f9b1836c3568-kube-api-access-t246h\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.330699 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.330779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-scripts\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc 
kubenswrapper[4688]: I1125 12:33:07.330847 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.330959 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78027e7c-30ce-4ec6-b928-f9b1836c3568-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.330985 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t246h\" (UniqueName: \"kubernetes.io/projected/78027e7c-30ce-4ec6-b928-f9b1836c3568-kube-api-access-t246h\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.331061 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-config-data\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.332468 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78027e7c-30ce-4ec6-b928-f9b1836c3568-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.336833 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.338054 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-config-data\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.361346 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.362591 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78027e7c-30ce-4ec6-b928-f9b1836c3568-scripts\") pod \"cinder-scheduler-0\" (UID: \"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.363658 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t246h\" (UniqueName: \"kubernetes.io/projected/78027e7c-30ce-4ec6-b928-f9b1836c3568-kube-api-access-t246h\") pod \"cinder-scheduler-0\" (UID: 
\"78027e7c-30ce-4ec6-b928-f9b1836c3568\") " pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.376979 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.473865 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5dcbb6d5d7-bpx7g"] Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.475941 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.479839 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.480118 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.480606 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.506973 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5dcbb6d5d7-bpx7g"] Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.580041 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.638313 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-internal-tls-certs\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.638381 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-public-tls-certs\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.638701 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-config-data\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.638778 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqjcj\" (UniqueName: \"kubernetes.io/projected/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-kube-api-access-zqjcj\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.638911 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-run-httpd\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.638993 
4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-combined-ca-bundle\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.639099 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-etc-swift\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.639183 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-log-httpd\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.741367 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-combined-ca-bundle\") pod \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.741421 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-config\") pod \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.741453 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-httpd-config\") pod \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.741561 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnlfj\" (UniqueName: \"kubernetes.io/projected/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-kube-api-access-jnlfj\") pod \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742275 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-ovndb-tls-certs\") pod \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\" (UID: \"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee\") " Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742500 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-public-tls-certs\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-config-data\") pod 
\"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742653 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqjcj\" (UniqueName: \"kubernetes.io/projected/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-kube-api-access-zqjcj\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742706 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-run-httpd\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-combined-ca-bundle\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742759 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-etc-swift\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742784 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-log-httpd\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.742818 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-internal-tls-certs\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.743399 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-run-httpd\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.747746 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-log-httpd\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.750462 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" (UID: "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.750603 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-kube-api-access-jnlfj" (OuterVolumeSpecName: "kube-api-access-jnlfj") pod "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" (UID: "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee"). InnerVolumeSpecName "kube-api-access-jnlfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.751400 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-internal-tls-certs\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.753905 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-combined-ca-bundle\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.754516 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-public-tls-certs\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.767718 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqjcj\" (UniqueName: \"kubernetes.io/projected/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-kube-api-access-zqjcj\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.767981 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-config-data\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.770398 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3f8ead-c9ee-4ce5-923a-558a17e1f688-etc-swift\") pod \"swift-proxy-5dcbb6d5d7-bpx7g\" (UID: \"2c3f8ead-c9ee-4ce5-923a-558a17e1f688\") " pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.806619 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-config" (OuterVolumeSpecName: "config") pod "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" (UID: "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.820972 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" (UID: "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.862011 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" (UID: "6d0462a2-2ad3-4b7a-9092-31fa724fb4ee"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.864506 4688 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.864611 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.864624 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.864636 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.864647 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnlfj\" (UniqueName: \"kubernetes.io/projected/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee-kube-api-access-jnlfj\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:07 crc kubenswrapper[4688]: I1125 12:33:07.874031 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.006187 4688 generic.go:334] "Generic (PLEG): container finished" podID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerID="ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1" exitCode=0 Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.006239 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d655c956-kd82z" event={"ID":"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee","Type":"ContainerDied","Data":"ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1"} Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.006271 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d655c956-kd82z" event={"ID":"6d0462a2-2ad3-4b7a-9092-31fa724fb4ee","Type":"ContainerDied","Data":"ccc1ea27ec67a65fefacd2c55ac451b4a870cd9ae8b90bd9c3278e2985a0cfa3"} Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.006291 4688 scope.go:117] "RemoveContainer" containerID="63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.006444 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64d655c956-kd82z" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.012079 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 12:33:08 crc kubenswrapper[4688]: W1125 12:33:08.059357 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78027e7c_30ce_4ec6_b928_f9b1836c3568.slice/crio-7da1eff2b9c7fba2e2348fe1e9285efd99e06dcf9e8f470393fb7c030a121ea2 WatchSource:0}: Error finding container 7da1eff2b9c7fba2e2348fe1e9285efd99e06dcf9e8f470393fb7c030a121ea2: Status 404 returned error can't find the container with id 7da1eff2b9c7fba2e2348fe1e9285efd99e06dcf9e8f470393fb7c030a121ea2 Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.080162 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64d655c956-kd82z"] Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.117885 4688 scope.go:117] "RemoveContainer" containerID="ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.120349 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64d655c956-kd82z"] Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.193641 4688 scope.go:117] "RemoveContainer" containerID="63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97" Nov 25 12:33:08 crc kubenswrapper[4688]: E1125 12:33:08.195069 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97\": container with ID starting with 63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97 not found: ID does not exist" containerID="63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.195111 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97"} err="failed to get container status \"63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97\": rpc error: code = NotFound desc = could not find container 
\"63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97\": container with ID starting with 63659f7e5c7a94ff3feee392c07a1b136b3c4f044b1c12d1d8b172a5ab38db97 not found: ID does not exist" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.195134 4688 scope.go:117] "RemoveContainer" containerID="ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1" Nov 25 12:33:08 crc kubenswrapper[4688]: E1125 12:33:08.196144 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1\": container with ID starting with ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1 not found: ID does not exist" containerID="ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.196169 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1"} err="failed to get container status \"ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1\": rpc error: code = NotFound desc = could not find container \"ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1\": container with ID starting with ae93697fc67cb6d415262a61c6bde2a9bf410b9845c890d323969cbd2e3a2ea1 not found: ID does not exist" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.693131 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5dcbb6d5d7-bpx7g"] Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.754108 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" path="/var/lib/kubelet/pods/6d0462a2-2ad3-4b7a-9092-31fa724fb4ee/volumes" Nov 25 12:33:08 crc kubenswrapper[4688]: I1125 12:33:08.755168 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9633d99f-02a1-4737-a439-01bfe5d79d6f" path="/var/lib/kubelet/pods/9633d99f-02a1-4737-a439-01bfe5d79d6f/volumes" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.003508 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 12:33:09 crc kubenswrapper[4688]: E1125 12:33:09.004111 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerName="neutron-api" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.004126 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerName="neutron-api" Nov 25 12:33:09 crc kubenswrapper[4688]: E1125 12:33:09.004143 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerName="neutron-httpd" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.004150 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerName="neutron-httpd" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.004351 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerName="neutron-httpd" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.004370 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0462a2-2ad3-4b7a-9092-31fa724fb4ee" containerName="neutron-api" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.005674 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.007971 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-n5wsz" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.008066 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.017103 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.044550 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.058125 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" event={"ID":"2c3f8ead-c9ee-4ce5-923a-558a17e1f688","Type":"ContainerStarted","Data":"7c936a6d89229abb380acb8f3ace8e981236681151588b314ace1c2669dfd7ab"} Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.058185 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" event={"ID":"2c3f8ead-c9ee-4ce5-923a-558a17e1f688","Type":"ContainerStarted","Data":"746a7cd78cc0f1316c7a04855046ff0d5f4fe87c9d21466710536ef4d0803d63"} Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.064619 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78027e7c-30ce-4ec6-b928-f9b1836c3568","Type":"ContainerStarted","Data":"272fa25b29eaeb487e2f10bacac105112df96dec8d0b4c32880f366867070f7b"} Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.064658 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78027e7c-30ce-4ec6-b928-f9b1836c3568","Type":"ContainerStarted","Data":"7da1eff2b9c7fba2e2348fe1e9285efd99e06dcf9e8f470393fb7c030a121ea2"} Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.102592 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.102683 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.102720 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config-secret\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.102846 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6fdc\" (UniqueName: \"kubernetes.io/projected/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-kube-api-access-c6fdc\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 
crc kubenswrapper[4688]: I1125 12:33:09.221159 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.221880 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.222654 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config-secret\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.222816 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6fdc\" (UniqueName: \"kubernetes.io/projected/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-kube-api-access-c6fdc\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.223943 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.228343 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.229250 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config-secret\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.248269 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6fdc\" (UniqueName: \"kubernetes.io/projected/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-kube-api-access-c6fdc\") pod \"openstackclient\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.327631 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.400297 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.426072 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.436221 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.437753 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.444461 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 12:33:09 crc kubenswrapper[4688]: E1125 12:33:09.543013 4688 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 12:33:09 crc kubenswrapper[4688]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_b2052cd2-0348-4c12-ac00-b221c4fa8dcc_0(4412b8cff87c549053da852792feb8a74620491e132534f1b13eb04d20ba9862): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4412b8cff87c549053da852792feb8a74620491e132534f1b13eb04d20ba9862" Netns:"/var/run/netns/c4e9cebf-2f24-4f59-bcc4-97f3faf79bb7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4412b8cff87c549053da852792feb8a74620491e132534f1b13eb04d20ba9862;K8S_POD_UID=b2052cd2-0348-4c12-ac00-b221c4fa8dcc" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/b2052cd2-0348-4c12-ac00-b221c4fa8dcc]: expected pod UID "b2052cd2-0348-4c12-ac00-b221c4fa8dcc" but got "fe63e1cf-543e-46d0-a4f8-0144f2201219" from Kube API Nov 25 12:33:09 crc kubenswrapper[4688]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 12:33:09 crc kubenswrapper[4688]: > Nov 25 12:33:09 crc kubenswrapper[4688]: E1125 12:33:09.543313 4688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 12:33:09 crc kubenswrapper[4688]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_b2052cd2-0348-4c12-ac00-b221c4fa8dcc_0(4412b8cff87c549053da852792feb8a74620491e132534f1b13eb04d20ba9862): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4412b8cff87c549053da852792feb8a74620491e132534f1b13eb04d20ba9862" Netns:"/var/run/netns/c4e9cebf-2f24-4f59-bcc4-97f3faf79bb7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4412b8cff87c549053da852792feb8a74620491e132534f1b13eb04d20ba9862;K8S_POD_UID=b2052cd2-0348-4c12-ac00-b221c4fa8dcc" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: 
[openstack/openstackclient/b2052cd2-0348-4c12-ac00-b221c4fa8dcc]: expected pod UID "b2052cd2-0348-4c12-ac00-b221c4fa8dcc" but got "fe63e1cf-543e-46d0-a4f8-0144f2201219" from Kube API Nov 25 12:33:09 crc kubenswrapper[4688]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 12:33:09 crc kubenswrapper[4688]: > pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.631015 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.631456 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config-secret\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.631664 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.631700 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vnj\" (UniqueName: \"kubernetes.io/projected/fe63e1cf-543e-46d0-a4f8-0144f2201219-kube-api-access-52vnj\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.733486 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.733554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vnj\" (UniqueName: \"kubernetes.io/projected/fe63e1cf-543e-46d0-a4f8-0144f2201219-kube-api-access-52vnj\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.733670 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.733755 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config-secret\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.735856 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.740096 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config-secret\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.740231 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.757016 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vnj\" (UniqueName: \"kubernetes.io/projected/fe63e1cf-543e-46d0-a4f8-0144f2201219-kube-api-access-52vnj\") pod \"openstackclient\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " pod="openstack/openstackclient" Nov 25 12:33:09 crc kubenswrapper[4688]: I1125 12:33:09.797840 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.092827 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" event={"ID":"2c3f8ead-c9ee-4ce5-923a-558a17e1f688","Type":"ContainerStarted","Data":"f6d18c8d1c956a4fa203b3e168f1474f81e734da39435dfdbe9a3cc3df955e85"} Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.094679 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.094751 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.100443 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.100677 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78027e7c-30ce-4ec6-b928-f9b1836c3568","Type":"ContainerStarted","Data":"92a6e0d022ff35ceefa669dd2208cbd56ad6113315a0e41b01007d25002c93ae"} Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.113332 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.123876 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b2052cd2-0348-4c12-ac00-b221c4fa8dcc" podUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.157994 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.157973357 podStartE2EDuration="4.157973357s" podCreationTimestamp="2025-11-25 12:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:10.155812908 +0000 UTC m=+1140.265441796" watchObservedRunningTime="2025-11-25 12:33:10.157973357 +0000 UTC m=+1140.267602225" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.158118 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" podStartSLOduration=3.15811091 podStartE2EDuration="3.15811091s" podCreationTimestamp="2025-11-25 12:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:10.119796742 +0000 UTC m=+1140.229425620" watchObservedRunningTime="2025-11-25 12:33:10.15811091 +0000 UTC m=+1140.267739778" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.166910 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.243614 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-combined-ca-bundle\") pod \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.243761 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6fdc\" (UniqueName: \"kubernetes.io/projected/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-kube-api-access-c6fdc\") pod \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.243800 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config\") pod \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.243880 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config-secret\") pod \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\" (UID: \"b2052cd2-0348-4c12-ac00-b221c4fa8dcc\") " Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.247412 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b2052cd2-0348-4c12-ac00-b221c4fa8dcc" (UID: "b2052cd2-0348-4c12-ac00-b221c4fa8dcc"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.250877 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2052cd2-0348-4c12-ac00-b221c4fa8dcc" (UID: "b2052cd2-0348-4c12-ac00-b221c4fa8dcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.251388 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b2052cd2-0348-4c12-ac00-b221c4fa8dcc" (UID: "b2052cd2-0348-4c12-ac00-b221c4fa8dcc"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.251430 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-kube-api-access-c6fdc" (OuterVolumeSpecName: "kube-api-access-c6fdc") pod "b2052cd2-0348-4c12-ac00-b221c4fa8dcc" (UID: "b2052cd2-0348-4c12-ac00-b221c4fa8dcc"). InnerVolumeSpecName "kube-api-access-c6fdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.347008 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.347045 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6fdc\" (UniqueName: \"kubernetes.io/projected/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-kube-api-access-c6fdc\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.347059 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.347070 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b2052cd2-0348-4c12-ac00-b221c4fa8dcc-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.355895 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.643606 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6885f88968-shb6s"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.650609 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.653062 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gn9hd" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.653345 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.653595 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.683738 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6885f88968-shb6s"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.763257 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data-custom\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.763367 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.763413 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4dv\" (UniqueName: \"kubernetes.io/projected/de20cc3c-3de9-4e8a-97ba-203205cbb278-kube-api-access-hl4dv\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.763575 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-combined-ca-bundle\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.806372 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2052cd2-0348-4c12-ac00-b221c4fa8dcc" path="/var/lib/kubelet/pods/b2052cd2-0348-4c12-ac00-b221c4fa8dcc/volumes" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.807045 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b778675bb-bfk2b"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.811477 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zbs4x"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.813060 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b778675bb-bfk2b"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.813091 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zbs4x"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.813184 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.817842 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.823711 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-f7cb49745-qm48p"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.825267 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.827803 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.828652 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.877296 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-combined-ca-bundle\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.877405 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data-custom\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.877467 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.877503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl4dv\" (UniqueName: \"kubernetes.io/projected/de20cc3c-3de9-4e8a-97ba-203205cbb278-kube-api-access-hl4dv\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.900702 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data-custom\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.907557 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.912570 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-combined-ca-bundle\") pod \"heat-engine-6885f88968-shb6s\" (UID: 
\"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.923805 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl4dv\" (UniqueName: \"kubernetes.io/projected/de20cc3c-3de9-4e8a-97ba-203205cbb278-kube-api-access-hl4dv\") pod \"heat-engine-6885f88968-shb6s\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.925111 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f7cb49745-qm48p"] Nov 25 12:33:10 crc kubenswrapper[4688]: I1125 12:33:10.983193 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:10.999912 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000322 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000376 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data-custom\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000428 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000475 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-config\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000506 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grh2l\" (UniqueName: \"kubernetes.io/projected/e75ca9ca-8146-4469-bab4-8db0e4735f0e-kube-api-access-grh2l\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000587 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: 
\"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000617 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-combined-ca-bundle\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000648 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000670 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbzgj\" (UniqueName: \"kubernetes.io/projected/a6212668-8e7b-4cd0-8e4c-c1de46191e97-kube-api-access-hbzgj\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000746 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000794 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data-custom\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000821 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm796\" (UniqueName: \"kubernetes.io/projected/32b7e898-1175-4763-b18b-cf98c2ca0982-kube-api-access-sm796\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.000862 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-combined-ca-bundle\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102351 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102407 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-config\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102435 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grh2l\" (UniqueName: \"kubernetes.io/projected/e75ca9ca-8146-4469-bab4-8db0e4735f0e-kube-api-access-grh2l\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102483 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-combined-ca-bundle\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102539 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102559 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbzgj\" (UniqueName: \"kubernetes.io/projected/a6212668-8e7b-4cd0-8e4c-c1de46191e97-kube-api-access-hbzgj\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102601 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102624 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data-custom\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102639 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm796\" (UniqueName: \"kubernetes.io/projected/32b7e898-1175-4763-b18b-cf98c2ca0982-kube-api-access-sm796\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102665 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-combined-ca-bundle\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102717 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.102742 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data-custom\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.104745 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.105174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.105376 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.107323 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-config\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.107701 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.111425 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data-custom\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: 
\"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.113075 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.115071 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.117241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data-custom\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.120645 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-combined-ca-bundle\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.121383 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-combined-ca-bundle\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.126482 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm796\" (UniqueName: \"kubernetes.io/projected/32b7e898-1175-4763-b18b-cf98c2ca0982-kube-api-access-sm796\") pod \"dnsmasq-dns-7756b9d78c-zbs4x\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.127399 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbzgj\" (UniqueName: \"kubernetes.io/projected/a6212668-8e7b-4cd0-8e4c-c1de46191e97-kube-api-access-hbzgj\") pod \"heat-api-f7cb49745-qm48p\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.127461 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fe63e1cf-543e-46d0-a4f8-0144f2201219","Type":"ContainerStarted","Data":"7b60327ce4ce0c3758e2ed30e6c0528ca4894c39532cdc037a61a784f1b6eb08"} Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.127598 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.128379 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grh2l\" (UniqueName: \"kubernetes.io/projected/e75ca9ca-8146-4469-bab4-8db0e4735f0e-kube-api-access-grh2l\") pod \"heat-cfnapi-7b778675bb-bfk2b\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.133444 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b2052cd2-0348-4c12-ac00-b221c4fa8dcc" podUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.286976 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.313076 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.313340 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="ceilometer-central-agent" containerID="cri-o://c88bf2ba39592659f642aba45bbedd6f4bbd7046240427294653f66e4b94df02" gracePeriod=30 Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.314077 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="proxy-httpd" containerID="cri-o://9f28bc1ec3cb2644230c796092dbaa4cd8a5bbf62d94459a7e416f72d9e439a7" gracePeriod=30 Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.314157 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="sg-core" containerID="cri-o://ddca4c2c0d3cb1b55ca5d741454911767e00bdadd65ad40a2067cba0166644ad" gracePeriod=30 Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.314190 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="ceilometer-notification-agent" containerID="cri-o://c041edd72ce2c13278ce43c4dc0129671f9fa55433e0a9d55bb9a1781b8cec87" gracePeriod=30 Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.328936 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.363968 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.366387 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 25 12:33:11 crc kubenswrapper[4688]: I1125 12:33:11.555091 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6885f88968-shb6s"] Nov 25 12:33:11 crc kubenswrapper[4688]: W1125 12:33:11.659916 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde20cc3c_3de9_4e8a_97ba_203205cbb278.slice/crio-6f2fba3cfdb2428c1a2c85b33a80013fdf6664742d0258eb89fab1490a667509 WatchSource:0}: Error finding container 6f2fba3cfdb2428c1a2c85b33a80013fdf6664742d0258eb89fab1490a667509: Status 404 returned error can't find the container with id 6f2fba3cfdb2428c1a2c85b33a80013fdf6664742d0258eb89fab1490a667509 Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.001318 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.005951 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85ddfb974d-m4b6g" Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.143136 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zbs4x"] Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.191472 4688 generic.go:334] "Generic (PLEG): container finished" podID="12a65ac1-2346-451a-a4cb-33286a015370" containerID="9f28bc1ec3cb2644230c796092dbaa4cd8a5bbf62d94459a7e416f72d9e439a7" exitCode=0 Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.192202 4688 generic.go:334] "Generic (PLEG): container finished" podID="12a65ac1-2346-451a-a4cb-33286a015370" containerID="ddca4c2c0d3cb1b55ca5d741454911767e00bdadd65ad40a2067cba0166644ad" exitCode=2 Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.192220 4688 generic.go:334] "Generic (PLEG): container finished" podID="12a65ac1-2346-451a-a4cb-33286a015370" containerID="c88bf2ba39592659f642aba45bbedd6f4bbd7046240427294653f66e4b94df02" exitCode=0 Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.191952 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerDied","Data":"9f28bc1ec3cb2644230c796092dbaa4cd8a5bbf62d94459a7e416f72d9e439a7"} Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.192416 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerDied","Data":"ddca4c2c0d3cb1b55ca5d741454911767e00bdadd65ad40a2067cba0166644ad"} Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.192436 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerDied","Data":"c88bf2ba39592659f642aba45bbedd6f4bbd7046240427294653f66e4b94df02"} Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.213788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6885f88968-shb6s" 
event={"ID":"de20cc3c-3de9-4e8a-97ba-203205cbb278","Type":"ContainerStarted","Data":"6f2fba3cfdb2428c1a2c85b33a80013fdf6664742d0258eb89fab1490a667509"} Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.378643 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.434493 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f7cb49745-qm48p"] Nov 25 12:33:12 crc kubenswrapper[4688]: W1125 12:33:12.442833 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6212668_8e7b_4cd0_8e4c_c1de46191e97.slice/crio-44d0c2b3710555ff141827bfca5d5140e1fed28d83fcd355f6297783f37ab092 WatchSource:0}: Error finding container 44d0c2b3710555ff141827bfca5d5140e1fed28d83fcd355f6297783f37ab092: Status 404 returned error can't find the container with id 44d0c2b3710555ff141827bfca5d5140e1fed28d83fcd355f6297783f37ab092 Nov 25 12:33:12 crc kubenswrapper[4688]: I1125 12:33:12.552255 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b778675bb-bfk2b"] Nov 25 12:33:13 crc kubenswrapper[4688]: I1125 12:33:13.238256 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6885f88968-shb6s" event={"ID":"de20cc3c-3de9-4e8a-97ba-203205cbb278","Type":"ContainerStarted","Data":"7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6"} Nov 25 12:33:13 crc kubenswrapper[4688]: I1125 12:33:13.239728 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:13 crc kubenswrapper[4688]: I1125 12:33:13.248387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f7cb49745-qm48p" event={"ID":"a6212668-8e7b-4cd0-8e4c-c1de46191e97","Type":"ContainerStarted","Data":"44d0c2b3710555ff141827bfca5d5140e1fed28d83fcd355f6297783f37ab092"} Nov 25 12:33:13 crc kubenswrapper[4688]: I1125 12:33:13.253059 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" event={"ID":"e75ca9ca-8146-4469-bab4-8db0e4735f0e","Type":"ContainerStarted","Data":"0324ac0f3133b4b4e3c7cbf926425540b74ca8d952baea442de7c71add89b6f5"} Nov 25 12:33:13 crc kubenswrapper[4688]: I1125 12:33:13.256232 4688 generic.go:334] "Generic (PLEG): container finished" podID="32b7e898-1175-4763-b18b-cf98c2ca0982" containerID="9d39dc6cb53aa456c012d57aaaa6596957e6730f1569174cc9cfe207b53423e3" exitCode=0 Nov 25 12:33:13 crc kubenswrapper[4688]: I1125 12:33:13.256283 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" event={"ID":"32b7e898-1175-4763-b18b-cf98c2ca0982","Type":"ContainerDied","Data":"9d39dc6cb53aa456c012d57aaaa6596957e6730f1569174cc9cfe207b53423e3"} Nov 25 12:33:13 crc kubenswrapper[4688]: I1125 12:33:13.256310 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" event={"ID":"32b7e898-1175-4763-b18b-cf98c2ca0982","Type":"ContainerStarted","Data":"047287faa3ba3f15103ca3816944975cd4007291ea14347f3ee0a1d65b7803aa"} Nov 25 12:33:13 crc kubenswrapper[4688]: I1125 12:33:13.282863 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6885f88968-shb6s" podStartSLOduration=3.282836603 podStartE2EDuration="3.282836603s" podCreationTimestamp="2025-11-25 12:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:13.257962825 +0000 UTC m=+1143.367591693" watchObservedRunningTime="2025-11-25 12:33:13.282836603 +0000 UTC m=+1143.392465491" Nov 25 12:33:14 crc kubenswrapper[4688]: I1125 12:33:14.279513 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" event={"ID":"32b7e898-1175-4763-b18b-cf98c2ca0982","Type":"ContainerStarted","Data":"41f775b9ff96d73f5aaaa03f8c2e09501b96fa29350d3912ff43c86136c3b654"} Nov 25 12:33:14 crc kubenswrapper[4688]: I1125 12:33:14.279880 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.315300 4688 generic.go:334] "Generic (PLEG): container finished" podID="12a65ac1-2346-451a-a4cb-33286a015370" containerID="c041edd72ce2c13278ce43c4dc0129671f9fa55433e0a9d55bb9a1781b8cec87" exitCode=0 Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.316688 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerDied","Data":"c041edd72ce2c13278ce43c4dc0129671f9fa55433e0a9d55bb9a1781b8cec87"} Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.478847 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.515398 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" podStartSLOduration=5.515376283 podStartE2EDuration="5.515376283s" podCreationTimestamp="2025-11-25 12:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:14.313829583 +0000 UTC m=+1144.423458461" watchObservedRunningTime="2025-11-25 12:33:15.515376283 +0000 UTC m=+1145.625005151" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.625004 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6f7j\" (UniqueName: \"kubernetes.io/projected/12a65ac1-2346-451a-a4cb-33286a015370-kube-api-access-j6f7j\") pod \"12a65ac1-2346-451a-a4cb-33286a015370\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.625158 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-sg-core-conf-yaml\") pod \"12a65ac1-2346-451a-a4cb-33286a015370\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.625210 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-log-httpd\") pod \"12a65ac1-2346-451a-a4cb-33286a015370\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.625264 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-run-httpd\") pod \"12a65ac1-2346-451a-a4cb-33286a015370\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.625311 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-config-data\") pod \"12a65ac1-2346-451a-a4cb-33286a015370\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.625346 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-scripts\") pod \"12a65ac1-2346-451a-a4cb-33286a015370\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.625411 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-combined-ca-bundle\") pod \"12a65ac1-2346-451a-a4cb-33286a015370\" (UID: \"12a65ac1-2346-451a-a4cb-33286a015370\") " Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.628040 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "12a65ac1-2346-451a-a4cb-33286a015370" (UID: "12a65ac1-2346-451a-a4cb-33286a015370"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.628374 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "12a65ac1-2346-451a-a4cb-33286a015370" (UID: "12a65ac1-2346-451a-a4cb-33286a015370"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.631998 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a65ac1-2346-451a-a4cb-33286a015370-kube-api-access-j6f7j" (OuterVolumeSpecName: "kube-api-access-j6f7j") pod "12a65ac1-2346-451a-a4cb-33286a015370" (UID: "12a65ac1-2346-451a-a4cb-33286a015370"). InnerVolumeSpecName "kube-api-access-j6f7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.637664 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-scripts" (OuterVolumeSpecName: "scripts") pod "12a65ac1-2346-451a-a4cb-33286a015370" (UID: "12a65ac1-2346-451a-a4cb-33286a015370"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.684435 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "12a65ac1-2346-451a-a4cb-33286a015370" (UID: "12a65ac1-2346-451a-a4cb-33286a015370"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.728345 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.728921 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.728986 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6f7j\" (UniqueName: \"kubernetes.io/projected/12a65ac1-2346-451a-a4cb-33286a015370-kube-api-access-j6f7j\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.729042 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.729093 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12a65ac1-2346-451a-a4cb-33286a015370-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.799201 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12a65ac1-2346-451a-a4cb-33286a015370" (UID: "12a65ac1-2346-451a-a4cb-33286a015370"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.830631 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.840903 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-config-data" (OuterVolumeSpecName: "config-data") pod "12a65ac1-2346-451a-a4cb-33286a015370" (UID: "12a65ac1-2346-451a-a4cb-33286a015370"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:15 crc kubenswrapper[4688]: I1125 12:33:15.932637 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12a65ac1-2346-451a-a4cb-33286a015370-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.338706 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12a65ac1-2346-451a-a4cb-33286a015370","Type":"ContainerDied","Data":"14b4c7c265b6cbf7146b46f7592244383fea1e00c590da4da200155b952d83c1"} Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.338768 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.338784 4688 scope.go:117] "RemoveContainer" containerID="9f28bc1ec3cb2644230c796092dbaa4cd8a5bbf62d94459a7e416f72d9e439a7" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.342013 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f7cb49745-qm48p" event={"ID":"a6212668-8e7b-4cd0-8e4c-c1de46191e97","Type":"ContainerStarted","Data":"61707952e0d13b176b5cdda1686051c2fa6821b1d18937609e42a7f1c37a5576"} Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.343033 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.355000 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" event={"ID":"e75ca9ca-8146-4469-bab4-8db0e4735f0e","Type":"ContainerStarted","Data":"e2a6c2b1bb6fc407f74c1f54b8b98710f2196d394bcf48d2028e24a13f5623e6"} Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.355846 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.370333 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-f7cb49745-qm48p" podStartSLOduration=3.768960599 podStartE2EDuration="6.370303955s" podCreationTimestamp="2025-11-25 12:33:10 +0000 UTC" firstStartedPulling="2025-11-25 12:33:12.447147318 +0000 UTC m=+1142.556776186" lastFinishedPulling="2025-11-25 12:33:15.048490684 +0000 UTC m=+1145.158119542" observedRunningTime="2025-11-25 12:33:16.365704771 +0000 UTC m=+1146.475333639" watchObservedRunningTime="2025-11-25 12:33:16.370303955 +0000 UTC m=+1146.479932823" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.376197 4688 scope.go:117] "RemoveContainer" containerID="ddca4c2c0d3cb1b55ca5d741454911767e00bdadd65ad40a2067cba0166644ad" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.392874 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.401171 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.404901 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" podStartSLOduration=3.957480223 podStartE2EDuration="6.404881434s" podCreationTimestamp="2025-11-25 12:33:10 +0000 UTC" firstStartedPulling="2025-11-25 12:33:12.600708493 +0000 UTC m=+1142.710337361" lastFinishedPulling="2025-11-25 12:33:15.048109704 +0000 UTC m=+1145.157738572" observedRunningTime="2025-11-25 12:33:16.397760562 +0000 UTC m=+1146.507389430" watchObservedRunningTime="2025-11-25 12:33:16.404881434 +0000 UTC m=+1146.514510302" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.408137 4688 scope.go:117] "RemoveContainer" containerID="c041edd72ce2c13278ce43c4dc0129671f9fa55433e0a9d55bb9a1781b8cec87" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.434467 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:16 crc kubenswrapper[4688]: E1125 12:33:16.435079 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="proxy-httpd" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.435119 4688 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="proxy-httpd" Nov 25 12:33:16 crc kubenswrapper[4688]: E1125 12:33:16.435135 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="sg-core" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.435144 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="sg-core" Nov 25 12:33:16 crc kubenswrapper[4688]: E1125 12:33:16.435155 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="ceilometer-notification-agent" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.435164 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="ceilometer-notification-agent" Nov 25 12:33:16 crc kubenswrapper[4688]: E1125 12:33:16.435202 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="ceilometer-central-agent" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.435210 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="ceilometer-central-agent" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.435461 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="ceilometer-notification-agent" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.435493 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="ceilometer-central-agent" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.435511 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="sg-core" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.435547 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a65ac1-2346-451a-a4cb-33286a015370" containerName="proxy-httpd" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.437511 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.441282 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.443214 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.444490 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.444829 4688 scope.go:117] "RemoveContainer" containerID="c88bf2ba39592659f642aba45bbedd6f4bbd7046240427294653f66e4b94df02" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.543339 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.543445 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-config-data\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.543492 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-log-httpd\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.543559 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-scripts\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.543583 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6t7l\" (UniqueName: \"kubernetes.io/projected/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-kube-api-access-h6t7l\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.543605 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-run-httpd\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.543647 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.644781 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-log-httpd\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.644862 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-scripts\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.644885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6t7l\" (UniqueName: \"kubernetes.io/projected/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-kube-api-access-h6t7l\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.644908 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-run-httpd\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.644940 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.644975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.645024 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-config-data\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.645577 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-log-httpd\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.645704 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-run-httpd\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.650211 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-scripts\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.658487 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-config-data\") 
pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.658474 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.663151 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.671297 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6t7l\" (UniqueName: \"kubernetes.io/projected/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-kube-api-access-h6t7l\") pod \"ceilometer-0\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " pod="openstack/ceilometer-0" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.762062 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a65ac1-2346-451a-a4cb-33286a015370" path="/var/lib/kubelet/pods/12a65ac1-2346-451a-a4cb-33286a015370/volumes" Nov 25 12:33:16 crc kubenswrapper[4688]: I1125 12:33:16.775312 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.267238 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.377512 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerStarted","Data":"8cf5a29feb8750e8680e48f0b5365c221c24760f0277cae70ce64804cb3cf0d3"} Nov 25 12:33:17 crc kubenswrapper[4688]: E1125 12:33:17.591555 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2052cd2_0348_4c12_ac00_b221c4fa8dcc.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.624035 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.783478 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-ff9c99746-zhh6h"] Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.785394 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.805995 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-ff9c99746-zhh6h"] Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.876883 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-combined-ca-bundle\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.877319 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-config-data\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.877359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-config-data-custom\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.877391 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7sjl\" (UniqueName: \"kubernetes.io/projected/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-kube-api-access-m7sjl\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.883644 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.887034 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.897383 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78f5f9ff74-htn8s"] Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.898979 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.914141 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5c9cbfcdb-bk4g8"] Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.915906 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.930401 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78f5f9ff74-htn8s"] Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.944721 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c9cbfcdb-bk4g8"] Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.979537 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-config-data\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.980155 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-combined-ca-bundle\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.980197 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-combined-ca-bundle\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.980259 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-config-data-custom\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.980339 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7sjl\" (UniqueName: \"kubernetes.io/projected/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-kube-api-access-m7sjl\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.980944 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.981052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data-custom\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.981184 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpgfz\" (UniqueName: \"kubernetes.io/projected/55e36d00-8496-4835-a57c-2cae27092645-kube-api-access-xpgfz\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: 
\"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.981262 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data-custom\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.981297 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fvf\" (UniqueName: \"kubernetes.io/projected/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-kube-api-access-55fvf\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.982045 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.982099 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-combined-ca-bundle\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:17 crc kubenswrapper[4688]: I1125 12:33:17.988805 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-config-data-custom\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.000897 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-config-data\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.006635 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7sjl\" (UniqueName: \"kubernetes.io/projected/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-kube-api-access-m7sjl\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.018055 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed213d5-b2be-4cf1-8416-1ec71b9bb32c-combined-ca-bundle\") pod \"heat-engine-ff9c99746-zhh6h\" (UID: \"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c\") " pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.084450 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: 
\"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.084586 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-combined-ca-bundle\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.084616 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-combined-ca-bundle\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.084691 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.084742 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data-custom\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.084791 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpgfz\" (UniqueName: \"kubernetes.io/projected/55e36d00-8496-4835-a57c-2cae27092645-kube-api-access-xpgfz\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.084824 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data-custom\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.084844 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55fvf\" (UniqueName: \"kubernetes.io/projected/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-kube-api-access-55fvf\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.098238 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-combined-ca-bundle\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.099674 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data-custom\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " 
pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.100687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data-custom\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.101426 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.103800 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-combined-ca-bundle\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.107562 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.108545 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpgfz\" (UniqueName: \"kubernetes.io/projected/55e36d00-8496-4835-a57c-2cae27092645-kube-api-access-xpgfz\") pod \"heat-api-78f5f9ff74-htn8s\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.109144 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.111788 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fvf\" (UniqueName: \"kubernetes.io/projected/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-kube-api-access-55fvf\") pod \"heat-cfnapi-5c9cbfcdb-bk4g8\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.181411 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.231310 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.259156 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.461243 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerStarted","Data":"1c0b82d9acbe647f35357bf526612ef6bff68e115f38bece75cf66375e5a2752"} Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.756670 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-ff9c99746-zhh6h"] Nov 25 12:33:18 crc kubenswrapper[4688]: I1125 12:33:18.878825 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c9cbfcdb-bk4g8"] Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.005718 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78f5f9ff74-htn8s"] Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.476272 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-ff9c99746-zhh6h" event={"ID":"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c","Type":"ContainerStarted","Data":"f93ac5616fc7e85c892cb6c158a7bff997b858607ae15aa9a0c2119807ca94de"} Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.476645 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-ff9c99746-zhh6h" event={"ID":"6ed213d5-b2be-4cf1-8416-1ec71b9bb32c","Type":"ContainerStarted","Data":"595cc1ab717e1b2468bc58ccb7250a0636210af3503fe84612e90dac5daf3354"} Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.476668 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.480170 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerStarted","Data":"0cbbb336c486796eebc1fabe1a442d44297859a15c850e48808f250410a23ae7"} Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.482242 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f5f9ff74-htn8s" event={"ID":"55e36d00-8496-4835-a57c-2cae27092645","Type":"ContainerStarted","Data":"73c0292541df633ab61111a20d19b8172a15e907e37996694b5b63083fbe0390"} Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.482290 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f5f9ff74-htn8s" event={"ID":"55e36d00-8496-4835-a57c-2cae27092645","Type":"ContainerStarted","Data":"e7b96ecf6ccda463a06be941c9bff9aeb9173bd24ad65b2965f4aa8a55eb5c3c"} Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.482327 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.484164 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" event={"ID":"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b","Type":"ContainerStarted","Data":"1c338bcdd400297239172be3db47067b4c93d24a99f0708e25c5a999c221912d"} Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.484192 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" event={"ID":"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b","Type":"ContainerStarted","Data":"a31ae7ca324d2275961d7ca7b8c02d00250ae7daab6f5ac050cb991bc6285950"} Nov 25 12:33:19 crc 
kubenswrapper[4688]: I1125 12:33:19.484473 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.494569 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-ff9c99746-zhh6h" podStartSLOduration=2.494550244 podStartE2EDuration="2.494550244s" podCreationTimestamp="2025-11-25 12:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:19.488977384 +0000 UTC m=+1149.598606242" watchObservedRunningTime="2025-11-25 12:33:19.494550244 +0000 UTC m=+1149.604179112" Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.512933 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" podStartSLOduration=2.512911497 podStartE2EDuration="2.512911497s" podCreationTimestamp="2025-11-25 12:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:19.508200231 +0000 UTC m=+1149.617829109" watchObservedRunningTime="2025-11-25 12:33:19.512911497 +0000 UTC m=+1149.622540365" Nov 25 12:33:19 crc kubenswrapper[4688]: I1125 12:33:19.528030 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78f5f9ff74-htn8s" podStartSLOduration=2.528013373 podStartE2EDuration="2.528013373s" podCreationTimestamp="2025-11-25 12:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:19.527352725 +0000 UTC m=+1149.636981593" watchObservedRunningTime="2025-11-25 12:33:19.528013373 +0000 UTC m=+1149.637642241" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.053289 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b778675bb-bfk2b"] Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.054996 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" podUID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" containerName="heat-cfnapi" containerID="cri-o://e2a6c2b1bb6fc407f74c1f54b8b98710f2196d394bcf48d2028e24a13f5623e6" gracePeriod=60 Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.066448 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-f7cb49745-qm48p"] Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.066964 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-f7cb49745-qm48p" podUID="a6212668-8e7b-4cd0-8e4c-c1de46191e97" containerName="heat-api" containerID="cri-o://61707952e0d13b176b5cdda1686051c2fa6821b1d18937609e42a7f1c37a5576" gracePeriod=60 Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.087139 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-f7cb49745-qm48p" podUID="a6212668-8e7b-4cd0-8e4c-c1de46191e97" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.174:8004/healthcheck\": EOF" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.099403 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7f444b957c-4fqdt"] Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.101090 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.105806 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.106099 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.111177 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f444b957c-4fqdt"] Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.124491 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7d45cc658d-s47zc"] Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.147005 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.157839 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.158082 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.195428 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d45cc658d-s47zc"] Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251423 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-public-tls-certs\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251479 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-combined-ca-bundle\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251548 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-config-data\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251614 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-config-data\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251640 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-internal-tls-certs\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251665 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-config-data-custom\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251692 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-public-tls-certs\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251717 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkg8m\" (UniqueName: \"kubernetes.io/projected/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-kube-api-access-zkg8m\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251755 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-config-data-custom\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm8w7\" (UniqueName: \"kubernetes.io/projected/1efc3d47-fd73-4e3c-9357-5fd608383972-kube-api-access-wm8w7\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.251950 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-internal-tls-certs\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.252002 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-combined-ca-bundle\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.353961 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-config-data\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.354032 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-internal-tls-certs\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " 
pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.354085 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-config-data-custom\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.354117 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-public-tls-certs\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.354145 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkg8m\" (UniqueName: \"kubernetes.io/projected/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-kube-api-access-zkg8m\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.355053 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-config-data-custom\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.355118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm8w7\" (UniqueName: \"kubernetes.io/projected/1efc3d47-fd73-4e3c-9357-5fd608383972-kube-api-access-wm8w7\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.355250 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-internal-tls-certs\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.355304 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-combined-ca-bundle\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.355331 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-public-tls-certs\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.355363 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-combined-ca-bundle\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " 
pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.355422 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-config-data\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.364291 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-config-data-custom\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.364402 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-public-tls-certs\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.365034 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-internal-tls-certs\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.370595 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-combined-ca-bundle\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.372631 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-config-data-custom\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.372860 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-public-tls-certs\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.373595 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-combined-ca-bundle\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.373938 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-internal-tls-certs\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.374278 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkg8m\" (UniqueName: \"kubernetes.io/projected/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-kube-api-access-zkg8m\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.376153 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d1502-0fe9-44cb-af7e-8466e27f75d4-config-data\") pod \"heat-api-7d45cc658d-s47zc\" (UID: \"ab8d1502-0fe9-44cb-af7e-8466e27f75d4\") " pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.380735 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1efc3d47-fd73-4e3c-9357-5fd608383972-config-data\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.385347 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm8w7\" (UniqueName: \"kubernetes.io/projected/1efc3d47-fd73-4e3c-9357-5fd608383972-kube-api-access-wm8w7\") pod \"heat-cfnapi-7f444b957c-4fqdt\" (UID: \"1efc3d47-fd73-4e3c-9357-5fd608383972\") " pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.425227 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.476941 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.529249 4688 generic.go:334] "Generic (PLEG): container finished" podID="55e36d00-8496-4835-a57c-2cae27092645" containerID="73c0292541df633ab61111a20d19b8172a15e907e37996694b5b63083fbe0390" exitCode=1 Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.529331 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f5f9ff74-htn8s" event={"ID":"55e36d00-8496-4835-a57c-2cae27092645","Type":"ContainerDied","Data":"73c0292541df633ab61111a20d19b8172a15e907e37996694b5b63083fbe0390"} Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.530128 4688 scope.go:117] "RemoveContainer" containerID="73c0292541df633ab61111a20d19b8172a15e907e37996694b5b63083fbe0390" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.541025 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" podUID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.172:8000/healthcheck\": read tcp 10.217.0.2:46136->10.217.0.172:8000: read: connection reset by peer" Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.542147 4688 generic.go:334] "Generic (PLEG): container finished" podID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" containerID="1c338bcdd400297239172be3db47067b4c93d24a99f0708e25c5a999c221912d" exitCode=1 Nov 25 12:33:20 crc kubenswrapper[4688]: I1125 12:33:20.542216 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" event={"ID":"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b","Type":"ContainerDied","Data":"1c338bcdd400297239172be3db47067b4c93d24a99f0708e25c5a999c221912d"} Nov 25 12:33:20 crc 
kubenswrapper[4688]: I1125 12:33:20.542726 4688 scope.go:117] "RemoveContainer" containerID="1c338bcdd400297239172be3db47067b4c93d24a99f0708e25c5a999c221912d" Nov 25 12:33:21 crc kubenswrapper[4688]: I1125 12:33:21.289511 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:33:21 crc kubenswrapper[4688]: I1125 12:33:21.330276 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" podUID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.172:8000/healthcheck\": dial tcp 10.217.0.172:8000: connect: connection refused" Nov 25 12:33:21 crc kubenswrapper[4688]: I1125 12:33:21.369865 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bd78x"] Nov 25 12:33:21 crc kubenswrapper[4688]: I1125 12:33:21.370127 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" podUID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerName="dnsmasq-dns" containerID="cri-o://6353734e7aae2f0844dc5ae9d6a1094b4ae12f5d548df5c61d41f9ffde3fad62" gracePeriod=10 Nov 25 12:33:21 crc kubenswrapper[4688]: I1125 12:33:21.555883 4688 generic.go:334] "Generic (PLEG): container finished" podID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" containerID="e2a6c2b1bb6fc407f74c1f54b8b98710f2196d394bcf48d2028e24a13f5623e6" exitCode=0 Nov 25 12:33:21 crc kubenswrapper[4688]: I1125 12:33:21.555922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" event={"ID":"e75ca9ca-8146-4469-bab4-8db0e4735f0e","Type":"ContainerDied","Data":"e2a6c2b1bb6fc407f74c1f54b8b98710f2196d394bcf48d2028e24a13f5623e6"} Nov 25 12:33:22 crc kubenswrapper[4688]: I1125 12:33:22.241627 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" podUID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused" Nov 25 12:33:22 crc kubenswrapper[4688]: I1125 12:33:22.567012 4688 generic.go:334] "Generic (PLEG): container finished" podID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerID="6353734e7aae2f0844dc5ae9d6a1094b4ae12f5d548df5c61d41f9ffde3fad62" exitCode=0 Nov 25 12:33:22 crc kubenswrapper[4688]: I1125 12:33:22.567064 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" event={"ID":"4bc73c38-2fd7-464c-99a3-4fb5fea684c8","Type":"ContainerDied","Data":"6353734e7aae2f0844dc5ae9d6a1094b4ae12f5d548df5c61d41f9ffde3fad62"} Nov 25 12:33:23 crc kubenswrapper[4688]: I1125 12:33:23.182777 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:23 crc kubenswrapper[4688]: I1125 12:33:23.231640 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:25 crc kubenswrapper[4688]: I1125 12:33:25.471635 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-f7cb49745-qm48p" podUID="a6212668-8e7b-4cd0-8e4c-c1de46191e97" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.174:8004/healthcheck\": read tcp 10.217.0.2:42624->10.217.0.174:8004: read: connection reset by peer" Nov 25 12:33:25 crc kubenswrapper[4688]: I1125 12:33:25.595556 4688 generic.go:334] "Generic (PLEG): 
container finished" podID="a6212668-8e7b-4cd0-8e4c-c1de46191e97" containerID="61707952e0d13b176b5cdda1686051c2fa6821b1d18937609e42a7f1c37a5576" exitCode=0 Nov 25 12:33:25 crc kubenswrapper[4688]: I1125 12:33:25.595609 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f7cb49745-qm48p" event={"ID":"a6212668-8e7b-4cd0-8e4c-c1de46191e97","Type":"ContainerDied","Data":"61707952e0d13b176b5cdda1686051c2fa6821b1d18937609e42a7f1c37a5576"} Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.303544 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.398346 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data\") pod \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.398433 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data-custom\") pod \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.398452 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbzgj\" (UniqueName: \"kubernetes.io/projected/a6212668-8e7b-4cd0-8e4c-c1de46191e97-kube-api-access-hbzgj\") pod \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.398578 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-combined-ca-bundle\") pod \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\" (UID: \"a6212668-8e7b-4cd0-8e4c-c1de46191e97\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.411755 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6212668-8e7b-4cd0-8e4c-c1de46191e97-kube-api-access-hbzgj" (OuterVolumeSpecName: "kube-api-access-hbzgj") pod "a6212668-8e7b-4cd0-8e4c-c1de46191e97" (UID: "a6212668-8e7b-4cd0-8e4c-c1de46191e97"). InnerVolumeSpecName "kube-api-access-hbzgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.412252 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a6212668-8e7b-4cd0-8e4c-c1de46191e97" (UID: "a6212668-8e7b-4cd0-8e4c-c1de46191e97"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.451302 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.466117 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6212668-8e7b-4cd0-8e4c-c1de46191e97" (UID: "a6212668-8e7b-4cd0-8e4c-c1de46191e97"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.499312 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data\") pod \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.499459 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data-custom\") pod \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.499492 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-combined-ca-bundle\") pod \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.499561 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grh2l\" (UniqueName: \"kubernetes.io/projected/e75ca9ca-8146-4469-bab4-8db0e4735f0e-kube-api-access-grh2l\") pod \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\" (UID: \"e75ca9ca-8146-4469-bab4-8db0e4735f0e\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.499937 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.499954 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbzgj\" (UniqueName: \"kubernetes.io/projected/a6212668-8e7b-4cd0-8e4c-c1de46191e97-kube-api-access-hbzgj\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.499967 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.521822 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75ca9ca-8146-4469-bab4-8db0e4735f0e-kube-api-access-grh2l" (OuterVolumeSpecName: "kube-api-access-grh2l") pod "e75ca9ca-8146-4469-bab4-8db0e4735f0e" (UID: "e75ca9ca-8146-4469-bab4-8db0e4735f0e"). InnerVolumeSpecName "kube-api-access-grh2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.527806 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e75ca9ca-8146-4469-bab4-8db0e4735f0e" (UID: "e75ca9ca-8146-4469-bab4-8db0e4735f0e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.529730 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data" (OuterVolumeSpecName: "config-data") pod "a6212668-8e7b-4cd0-8e4c-c1de46191e97" (UID: "a6212668-8e7b-4cd0-8e4c-c1de46191e97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.538794 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.542617 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e75ca9ca-8146-4469-bab4-8db0e4735f0e" (UID: "e75ca9ca-8146-4469-bab4-8db0e4735f0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.601852 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.601891 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.601904 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grh2l\" (UniqueName: \"kubernetes.io/projected/e75ca9ca-8146-4469-bab4-8db0e4735f0e-kube-api-access-grh2l\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.601918 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6212668-8e7b-4cd0-8e4c-c1de46191e97-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.641672 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f444b957c-4fqdt"] Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.644064 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerStarted","Data":"72030925f121a9025397fa04c3da9b472b7212412d88e6511af79e5af6638f1a"} Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.645434 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data" (OuterVolumeSpecName: "config-data") pod "e75ca9ca-8146-4469-bab4-8db0e4735f0e" (UID: "e75ca9ca-8146-4469-bab4-8db0e4735f0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.646031 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" event={"ID":"e75ca9ca-8146-4469-bab4-8db0e4735f0e","Type":"ContainerDied","Data":"0324ac0f3133b4b4e3c7cbf926425540b74ca8d952baea442de7c71add89b6f5"} Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.646067 4688 scope.go:117] "RemoveContainer" containerID="e2a6c2b1bb6fc407f74c1f54b8b98710f2196d394bcf48d2028e24a13f5623e6" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.646183 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.661262 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" event={"ID":"4bc73c38-2fd7-464c-99a3-4fb5fea684c8","Type":"ContainerDied","Data":"b082472ff2b4610ba5315dfa4a83a87b3029b422701320e2171f90d852f0e7c5"} Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.661346 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bd78x" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.675248 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f5f9ff74-htn8s" event={"ID":"55e36d00-8496-4835-a57c-2cae27092645","Type":"ContainerStarted","Data":"8797f20742e92bfed2d0fbba6930da675732bef989cbde4384bbca9fd531e815"} Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.675292 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.689777 4688 scope.go:117] "RemoveContainer" containerID="6353734e7aae2f0844dc5ae9d6a1094b4ae12f5d548df5c61d41f9ffde3fad62" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.694933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f7cb49745-qm48p" event={"ID":"a6212668-8e7b-4cd0-8e4c-c1de46191e97","Type":"ContainerDied","Data":"44d0c2b3710555ff141827bfca5d5140e1fed28d83fcd355f6297783f37ab092"} Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.695101 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-f7cb49745-qm48p" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.705232 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b778675bb-bfk2b"] Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.709214 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8sl9\" (UniqueName: \"kubernetes.io/projected/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-kube-api-access-h8sl9\") pod \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.709407 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-nb\") pod \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.709501 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-config\") pod \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.709561 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-sb\") pod \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.709686 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-svc\") pod \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.709742 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-swift-storage-0\") pod \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\" (UID: \"4bc73c38-2fd7-464c-99a3-4fb5fea684c8\") " Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.710653 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75ca9ca-8146-4469-bab4-8db0e4735f0e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.710752 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b778675bb-bfk2b"] Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.716694 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" event={"ID":"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b","Type":"ContainerStarted","Data":"6e7f89dc5e5a3cc333adc947b2aa8cd02b7e6adc4d027f70defefff584ced38b"} Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.722476 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.724863 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-kube-api-access-h8sl9" (OuterVolumeSpecName: "kube-api-access-h8sl9") pod 
"4bc73c38-2fd7-464c-99a3-4fb5fea684c8" (UID: "4bc73c38-2fd7-464c-99a3-4fb5fea684c8"). InnerVolumeSpecName "kube-api-access-h8sl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.758222 4688 scope.go:117] "RemoveContainer" containerID="5817c49d63c2e55225b8fbc2083fa30823bae14e95fdb912eae8d2327886ee80" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.769330 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" path="/var/lib/kubelet/pods/e75ca9ca-8146-4469-bab4-8db0e4735f0e/volumes" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.806936 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4bc73c38-2fd7-464c-99a3-4fb5fea684c8" (UID: "4bc73c38-2fd7-464c-99a3-4fb5fea684c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.811923 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8sl9\" (UniqueName: \"kubernetes.io/projected/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-kube-api-access-h8sl9\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.811958 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.816614 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-f7cb49745-qm48p"] Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.818864 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-config" (OuterVolumeSpecName: "config") pod "4bc73c38-2fd7-464c-99a3-4fb5fea684c8" (UID: "4bc73c38-2fd7-464c-99a3-4fb5fea684c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.828806 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-f7cb49745-qm48p"] Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.838637 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bc73c38-2fd7-464c-99a3-4fb5fea684c8" (UID: "4bc73c38-2fd7-464c-99a3-4fb5fea684c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.844845 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d45cc658d-s47zc"] Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.852865 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bc73c38-2fd7-464c-99a3-4fb5fea684c8" (UID: "4bc73c38-2fd7-464c-99a3-4fb5fea684c8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.870161 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bc73c38-2fd7-464c-99a3-4fb5fea684c8" (UID: "4bc73c38-2fd7-464c-99a3-4fb5fea684c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:26 crc kubenswrapper[4688]: W1125 12:33:26.887863 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab8d1502_0fe9_44cb_af7e_8466e27f75d4.slice/crio-23b7041b5fa91d54d194c70f1e89cd42ead5053695c2df3c75ac16b4992b1dbd WatchSource:0}: Error finding container 23b7041b5fa91d54d194c70f1e89cd42ead5053695c2df3c75ac16b4992b1dbd: Status 404 returned error can't find the container with id 23b7041b5fa91d54d194c70f1e89cd42ead5053695c2df3c75ac16b4992b1dbd Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.914739 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.914792 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.914803 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:26 crc kubenswrapper[4688]: I1125 12:33:26.914814 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc73c38-2fd7-464c-99a3-4fb5fea684c8-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.129689 4688 scope.go:117] "RemoveContainer" containerID="61707952e0d13b176b5cdda1686051c2fa6821b1d18937609e42a7f1c37a5576" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.227747 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bd78x"] Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.236203 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bd78x"] Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.737718 4688 generic.go:334] "Generic (PLEG): container finished" podID="55e36d00-8496-4835-a57c-2cae27092645" containerID="8797f20742e92bfed2d0fbba6930da675732bef989cbde4384bbca9fd531e815" exitCode=1 Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.737786 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f5f9ff74-htn8s" event={"ID":"55e36d00-8496-4835-a57c-2cae27092645","Type":"ContainerDied","Data":"8797f20742e92bfed2d0fbba6930da675732bef989cbde4384bbca9fd531e815"} Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.738095 4688 scope.go:117] "RemoveContainer" containerID="73c0292541df633ab61111a20d19b8172a15e907e37996694b5b63083fbe0390" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.738441 4688 scope.go:117] "RemoveContainer" containerID="8797f20742e92bfed2d0fbba6930da675732bef989cbde4384bbca9fd531e815" Nov 25 12:33:27 crc kubenswrapper[4688]: 
E1125 12:33:27.738836 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-78f5f9ff74-htn8s_openstack(55e36d00-8496-4835-a57c-2cae27092645)\"" pod="openstack/heat-api-78f5f9ff74-htn8s" podUID="55e36d00-8496-4835-a57c-2cae27092645" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.741957 4688 generic.go:334] "Generic (PLEG): container finished" podID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" containerID="6e7f89dc5e5a3cc333adc947b2aa8cd02b7e6adc4d027f70defefff584ced38b" exitCode=1 Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.742027 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" event={"ID":"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b","Type":"ContainerDied","Data":"6e7f89dc5e5a3cc333adc947b2aa8cd02b7e6adc4d027f70defefff584ced38b"} Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.742645 4688 scope.go:117] "RemoveContainer" containerID="6e7f89dc5e5a3cc333adc947b2aa8cd02b7e6adc4d027f70defefff584ced38b" Nov 25 12:33:27 crc kubenswrapper[4688]: E1125 12:33:27.743861 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5c9cbfcdb-bk4g8_openstack(815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b)\"" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.757875 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d45cc658d-s47zc" event={"ID":"ab8d1502-0fe9-44cb-af7e-8466e27f75d4","Type":"ContainerStarted","Data":"1b542c5b9278d8e0c48c8b1d5db3f797b008ab450627fa8f3ed36a85b27988a1"} Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.757922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d45cc658d-s47zc" event={"ID":"ab8d1502-0fe9-44cb-af7e-8466e27f75d4","Type":"ContainerStarted","Data":"23b7041b5fa91d54d194c70f1e89cd42ead5053695c2df3c75ac16b4992b1dbd"} Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.758656 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.762043 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fe63e1cf-543e-46d0-a4f8-0144f2201219","Type":"ContainerStarted","Data":"67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c"} Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.769312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f444b957c-4fqdt" event={"ID":"1efc3d47-fd73-4e3c-9357-5fd608383972","Type":"ContainerStarted","Data":"348b32590a7ef3273d6c07d80bd92dd8f8b505c4f7c83a49da75022429f54d4a"} Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.769356 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f444b957c-4fqdt" event={"ID":"1efc3d47-fd73-4e3c-9357-5fd608383972","Type":"ContainerStarted","Data":"66f74f0fe6b98d85d17c2762812dc7f9f53d101c96744983d5d10a0f7f93e224"} Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.769570 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.786097 4688 scope.go:117] "RemoveContainer" 
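[Annotation] The "back-off 10s restarting failed container" errors above are the start of the kubelet's CrashLoopBackOff escalation: the first restart waits 10s and the delay roughly doubles per failed restart up to a cap of 5 minutes. The per-container bookkeeping (keys, reset window) is omitted; the sketch below only illustrates the delay growth:

// Illustrative CrashLoopBackOff delay growth: start at 10s, double per
// failed restart, cap at 5m. Not the kubelet's actual back-off code.
package main

import (
	"fmt"
	"time"
)

func backoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, backoff(r)) // 10s, 20s, 40s, ..., 5m0s
	}
}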
containerID="1c338bcdd400297239172be3db47067b4c93d24a99f0708e25c5a999c221912d" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.801849 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.181800982 podStartE2EDuration="18.801831998s" podCreationTimestamp="2025-11-25 12:33:09 +0000 UTC" firstStartedPulling="2025-11-25 12:33:10.358639426 +0000 UTC m=+1140.468268294" lastFinishedPulling="2025-11-25 12:33:25.978670442 +0000 UTC m=+1156.088299310" observedRunningTime="2025-11-25 12:33:27.789739794 +0000 UTC m=+1157.899368662" watchObservedRunningTime="2025-11-25 12:33:27.801831998 +0000 UTC m=+1157.911460866" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.834933 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7d45cc658d-s47zc" podStartSLOduration=7.834909917 podStartE2EDuration="7.834909917s" podCreationTimestamp="2025-11-25 12:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:27.815801323 +0000 UTC m=+1157.925430191" watchObservedRunningTime="2025-11-25 12:33:27.834909917 +0000 UTC m=+1157.944538785" Nov 25 12:33:27 crc kubenswrapper[4688]: I1125 12:33:27.851546 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7f444b957c-4fqdt" podStartSLOduration=7.851511553 podStartE2EDuration="7.851511553s" podCreationTimestamp="2025-11-25 12:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:27.83354452 +0000 UTC m=+1157.943173388" watchObservedRunningTime="2025-11-25 12:33:27.851511553 +0000 UTC m=+1157.961140421" Nov 25 12:33:27 crc kubenswrapper[4688]: E1125 12:33:27.920230 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2052cd2_0348_4c12_ac00_b221c4fa8dcc.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.184587 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.232408 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.754149 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" path="/var/lib/kubelet/pods/4bc73c38-2fd7-464c-99a3-4fb5fea684c8/volumes" Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.755188 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6212668-8e7b-4cd0-8e4c-c1de46191e97" path="/var/lib/kubelet/pods/a6212668-8e7b-4cd0-8e4c-c1de46191e97/volumes" Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.787836 4688 scope.go:117] "RemoveContainer" containerID="6e7f89dc5e5a3cc333adc947b2aa8cd02b7e6adc4d027f70defefff584ced38b" Nov 25 12:33:28 crc kubenswrapper[4688]: E1125 12:33:28.788157 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5c9cbfcdb-bk4g8_openstack(815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b)\"" 
pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.793753 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerStarted","Data":"f774a1faec289c33c0d287a9eefc90cae391208c668c240a06275b1ef4a53e22"} Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.793985 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="ceilometer-central-agent" containerID="cri-o://1c0b82d9acbe647f35357bf526612ef6bff68e115f38bece75cf66375e5a2752" gracePeriod=30 Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.794094 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.794143 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="proxy-httpd" containerID="cri-o://f774a1faec289c33c0d287a9eefc90cae391208c668c240a06275b1ef4a53e22" gracePeriod=30 Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.794189 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="sg-core" containerID="cri-o://72030925f121a9025397fa04c3da9b472b7212412d88e6511af79e5af6638f1a" gracePeriod=30 Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.794230 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="ceilometer-notification-agent" containerID="cri-o://0cbbb336c486796eebc1fabe1a442d44297859a15c850e48808f250410a23ae7" gracePeriod=30 Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.800942 4688 scope.go:117] "RemoveContainer" containerID="8797f20742e92bfed2d0fbba6930da675732bef989cbde4384bbca9fd531e815" Nov 25 12:33:28 crc kubenswrapper[4688]: E1125 12:33:28.801341 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-78f5f9ff74-htn8s_openstack(55e36d00-8496-4835-a57c-2cae27092645)\"" pod="openstack/heat-api-78f5f9ff74-htn8s" podUID="55e36d00-8496-4835-a57c-2cae27092645" Nov 25 12:33:28 crc kubenswrapper[4688]: I1125 12:33:28.861648 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.65701557 podStartE2EDuration="12.861621192s" podCreationTimestamp="2025-11-25 12:33:16 +0000 UTC" firstStartedPulling="2025-11-25 12:33:17.281683692 +0000 UTC m=+1147.391312560" lastFinishedPulling="2025-11-25 12:33:27.486289314 +0000 UTC m=+1157.595918182" observedRunningTime="2025-11-25 12:33:28.857155492 +0000 UTC m=+1158.966784360" watchObservedRunningTime="2025-11-25 12:33:28.861621192 +0000 UTC m=+1158.971250060" Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.812333 4688 generic.go:334] "Generic (PLEG): container finished" podID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerID="f774a1faec289c33c0d287a9eefc90cae391208c668c240a06275b1ef4a53e22" exitCode=0 Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.812911 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerID="72030925f121a9025397fa04c3da9b472b7212412d88e6511af79e5af6638f1a" exitCode=2 Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.812922 4688 generic.go:334] "Generic (PLEG): container finished" podID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerID="0cbbb336c486796eebc1fabe1a442d44297859a15c850e48808f250410a23ae7" exitCode=0 Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.812932 4688 generic.go:334] "Generic (PLEG): container finished" podID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerID="1c0b82d9acbe647f35357bf526612ef6bff68e115f38bece75cf66375e5a2752" exitCode=0 Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.812400 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerDied","Data":"f774a1faec289c33c0d287a9eefc90cae391208c668c240a06275b1ef4a53e22"} Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.812980 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerDied","Data":"72030925f121a9025397fa04c3da9b472b7212412d88e6511af79e5af6638f1a"} Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.812998 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerDied","Data":"0cbbb336c486796eebc1fabe1a442d44297859a15c850e48808f250410a23ae7"} Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.813015 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerDied","Data":"1c0b82d9acbe647f35357bf526612ef6bff68e115f38bece75cf66375e5a2752"} Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.813675 4688 scope.go:117] "RemoveContainer" containerID="8797f20742e92bfed2d0fbba6930da675732bef989cbde4384bbca9fd531e815" Nov 25 12:33:29 crc kubenswrapper[4688]: I1125 12:33:29.813757 4688 scope.go:117] "RemoveContainer" containerID="6e7f89dc5e5a3cc333adc947b2aa8cd02b7e6adc4d027f70defefff584ced38b" Nov 25 12:33:29 crc kubenswrapper[4688]: E1125 12:33:29.813872 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-78f5f9ff74-htn8s_openstack(55e36d00-8496-4835-a57c-2cae27092645)\"" pod="openstack/heat-api-78f5f9ff74-htn8s" podUID="55e36d00-8496-4835-a57c-2cae27092645" Nov 25 12:33:29 crc kubenswrapper[4688]: E1125 12:33:29.813993 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5c9cbfcdb-bk4g8_openstack(815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b)\"" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" Nov 25 12:33:30 crc kubenswrapper[4688]: I1125 12:33:30.823127 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15fc61ae-3bf9-4d1b-80a6-19e9745988ee","Type":"ContainerDied","Data":"8cf5a29feb8750e8680e48f0b5365c221c24760f0277cae70ce64804cb3cf0d3"} Nov 25 12:33:30 crc kubenswrapper[4688]: I1125 12:33:30.823396 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cf5a29feb8750e8680e48f0b5365c221c24760f0277cae70ce64804cb3cf0d3" Nov 25 
12:33:30 crc kubenswrapper[4688]: I1125 12:33:30.855080 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.007654 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-sg-core-conf-yaml\") pod \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.007737 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-run-httpd\") pod \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.007771 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-combined-ca-bundle\") pod \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.007813 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-log-httpd\") pod \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.007872 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-config-data\") pod \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.007909 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-scripts\") pod \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.008043 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6t7l\" (UniqueName: \"kubernetes.io/projected/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-kube-api-access-h6t7l\") pod \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\" (UID: \"15fc61ae-3bf9-4d1b-80a6-19e9745988ee\") " Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.009929 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15fc61ae-3bf9-4d1b-80a6-19e9745988ee" (UID: "15fc61ae-3bf9-4d1b-80a6-19e9745988ee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.010192 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15fc61ae-3bf9-4d1b-80a6-19e9745988ee" (UID: "15fc61ae-3bf9-4d1b-80a6-19e9745988ee"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.016090 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.016182 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-kube-api-access-h6t7l" (OuterVolumeSpecName: "kube-api-access-h6t7l") pod "15fc61ae-3bf9-4d1b-80a6-19e9745988ee" (UID: "15fc61ae-3bf9-4d1b-80a6-19e9745988ee"). InnerVolumeSpecName "kube-api-access-h6t7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.022715 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-scripts" (OuterVolumeSpecName: "scripts") pod "15fc61ae-3bf9-4d1b-80a6-19e9745988ee" (UID: "15fc61ae-3bf9-4d1b-80a6-19e9745988ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.043098 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15fc61ae-3bf9-4d1b-80a6-19e9745988ee" (UID: "15fc61ae-3bf9-4d1b-80a6-19e9745988ee"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.107868 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15fc61ae-3bf9-4d1b-80a6-19e9745988ee" (UID: "15fc61ae-3bf9-4d1b-80a6-19e9745988ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.109819 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.109847 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.109858 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.109869 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6t7l\" (UniqueName: \"kubernetes.io/projected/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-kube-api-access-h6t7l\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.109881 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.109891 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.111186 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-config-data" (OuterVolumeSpecName: "config-data") pod "15fc61ae-3bf9-4d1b-80a6-19e9745988ee" (UID: "15fc61ae-3bf9-4d1b-80a6-19e9745988ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.211432 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fc61ae-3bf9-4d1b-80a6-19e9745988ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.830579 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.868720 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.884717 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.895811 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:31 crc kubenswrapper[4688]: E1125 12:33:31.896318 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerName="init" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896341 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerName="init" Nov 25 12:33:31 crc kubenswrapper[4688]: E1125 12:33:31.896365 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="proxy-httpd" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896372 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="proxy-httpd" Nov 25 12:33:31 crc kubenswrapper[4688]: E1125 12:33:31.896383 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerName="dnsmasq-dns" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896389 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerName="dnsmasq-dns" Nov 25 12:33:31 crc kubenswrapper[4688]: E1125 12:33:31.896400 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="ceilometer-notification-agent" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896408 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="ceilometer-notification-agent" Nov 25 12:33:31 crc kubenswrapper[4688]: E1125 12:33:31.896444 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="ceilometer-central-agent" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896469 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="ceilometer-central-agent" Nov 25 12:33:31 crc kubenswrapper[4688]: E1125 12:33:31.896480 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6212668-8e7b-4cd0-8e4c-c1de46191e97" containerName="heat-api" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896487 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6212668-8e7b-4cd0-8e4c-c1de46191e97" containerName="heat-api" Nov 25 12:33:31 crc kubenswrapper[4688]: E1125 12:33:31.896495 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="sg-core" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896501 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="sg-core" Nov 25 12:33:31 crc kubenswrapper[4688]: E1125 12:33:31.896515 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" containerName="heat-cfnapi" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896538 4688 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" containerName="heat-cfnapi" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896741 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" containerName="heat-cfnapi" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896761 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="proxy-httpd" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896775 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="sg-core" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896787 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="ceilometer-central-agent" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896802 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" containerName="ceilometer-notification-agent" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896815 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc73c38-2fd7-464c-99a3-4fb5fea684c8" containerName="dnsmasq-dns" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.896827 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6212668-8e7b-4cd0-8e4c-c1de46191e97" containerName="heat-api" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.898699 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.910937 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.914249 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 12:33:31 crc kubenswrapper[4688]: I1125 12:33:31.914434 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.026857 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.026927 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-config-data\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.027187 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.027306 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-scripts\") pod 
\"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.027385 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-run-httpd\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.027444 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-log-httpd\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.027490 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdjz\" (UniqueName: \"kubernetes.io/projected/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-kube-api-access-vxdjz\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.128867 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.128916 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-config-data\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.129003 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.129052 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-scripts\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.129099 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-run-httpd\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.129133 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-log-httpd\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.129156 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdjz\" (UniqueName: 
\"kubernetes.io/projected/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-kube-api-access-vxdjz\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.129825 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-run-httpd\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.129899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-log-httpd\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.132800 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.133003 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-config-data\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.133412 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-scripts\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.133951 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.148775 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdjz\" (UniqueName: \"kubernetes.io/projected/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-kube-api-access-vxdjz\") pod \"ceilometer-0\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.230510 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.702164 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.752074 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15fc61ae-3bf9-4d1b-80a6-19e9745988ee" path="/var/lib/kubelet/pods/15fc61ae-3bf9-4d1b-80a6-19e9745988ee/volumes" Nov 25 12:33:32 crc kubenswrapper[4688]: I1125 12:33:32.841004 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerStarted","Data":"57bf83ba0b37a8a27b2b9dce3462ee369e663d44766ecbaaeeb6e453d1311bbc"} Nov 25 12:33:33 crc kubenswrapper[4688]: I1125 12:33:33.853629 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerStarted","Data":"1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c"} Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.120924 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.121181 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerName="glance-log" containerID="cri-o://9b964cf6f5fc61d17820fef569897efba3de46a97d23f2ca4905b64dbf4884b7" gracePeriod=30 Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.122168 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerName="glance-httpd" containerID="cri-o://a3b0b44a43dbc5e51249385c60e70fb4e3c5df01e8a46971467cdb0d490a491a" gracePeriod=30 Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.250619 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hrqzl"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.252163 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.272167 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hrqzl"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.375196 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5lgln"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.379940 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.391836 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbff\" (UniqueName: \"kubernetes.io/projected/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-kube-api-access-zcbff\") pod \"nova-api-db-create-hrqzl\" (UID: \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\") " pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.392049 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-operator-scripts\") pod \"nova-api-db-create-hrqzl\" (UID: \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\") " pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.406357 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5lgln"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.487319 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-96cc-account-create-j8vlk"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.488568 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.492770 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.493591 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcbff\" (UniqueName: \"kubernetes.io/projected/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-kube-api-access-zcbff\") pod \"nova-api-db-create-hrqzl\" (UID: \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\") " pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.493712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616880c6-3a97-4e57-80ab-6ffc23eddb21-operator-scripts\") pod \"nova-cell0-db-create-5lgln\" (UID: \"616880c6-3a97-4e57-80ab-6ffc23eddb21\") " pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.493762 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-operator-scripts\") pod \"nova-api-db-create-hrqzl\" (UID: \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\") " pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.493809 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rkt\" (UniqueName: \"kubernetes.io/projected/616880c6-3a97-4e57-80ab-6ffc23eddb21-kube-api-access-q5rkt\") pod \"nova-cell0-db-create-5lgln\" (UID: \"616880c6-3a97-4e57-80ab-6ffc23eddb21\") " pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.494421 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-operator-scripts\") pod \"nova-api-db-create-hrqzl\" (UID: \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\") " pod="openstack/nova-api-db-create-hrqzl" Nov 
25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.520243 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-96cc-account-create-j8vlk"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.522165 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcbff\" (UniqueName: \"kubernetes.io/projected/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-kube-api-access-zcbff\") pod \"nova-api-db-create-hrqzl\" (UID: \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\") " pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.573987 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.576347 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bclxq"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.577486 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.595317 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bclxq"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.596270 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5rkt\" (UniqueName: \"kubernetes.io/projected/616880c6-3a97-4e57-80ab-6ffc23eddb21-kube-api-access-q5rkt\") pod \"nova-cell0-db-create-5lgln\" (UID: \"616880c6-3a97-4e57-80ab-6ffc23eddb21\") " pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.596335 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d08010-f05e-4554-9b4c-7acbf51553c3-operator-scripts\") pod \"nova-api-96cc-account-create-j8vlk\" (UID: \"b1d08010-f05e-4554-9b4c-7acbf51553c3\") " pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.596421 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmmg7\" (UniqueName: \"kubernetes.io/projected/b1d08010-f05e-4554-9b4c-7acbf51553c3-kube-api-access-dmmg7\") pod \"nova-api-96cc-account-create-j8vlk\" (UID: \"b1d08010-f05e-4554-9b4c-7acbf51553c3\") " pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.596460 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616880c6-3a97-4e57-80ab-6ffc23eddb21-operator-scripts\") pod \"nova-cell0-db-create-5lgln\" (UID: \"616880c6-3a97-4e57-80ab-6ffc23eddb21\") " pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.597369 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616880c6-3a97-4e57-80ab-6ffc23eddb21-operator-scripts\") pod \"nova-cell0-db-create-5lgln\" (UID: \"616880c6-3a97-4e57-80ab-6ffc23eddb21\") " pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.633653 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5rkt\" (UniqueName: \"kubernetes.io/projected/616880c6-3a97-4e57-80ab-6ffc23eddb21-kube-api-access-q5rkt\") pod 
\"nova-cell0-db-create-5lgln\" (UID: \"616880c6-3a97-4e57-80ab-6ffc23eddb21\") " pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.688740 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0d93-account-create-7d8vt"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.689853 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.699208 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.700312 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx6m9\" (UniqueName: \"kubernetes.io/projected/c98f2c01-29ed-4744-8e2b-b62291565faf-kube-api-access-kx6m9\") pod \"nova-cell1-db-create-bclxq\" (UID: \"c98f2c01-29ed-4744-8e2b-b62291565faf\") " pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.700413 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d08010-f05e-4554-9b4c-7acbf51553c3-operator-scripts\") pod \"nova-api-96cc-account-create-j8vlk\" (UID: \"b1d08010-f05e-4554-9b4c-7acbf51553c3\") " pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.700477 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98f2c01-29ed-4744-8e2b-b62291565faf-operator-scripts\") pod \"nova-cell1-db-create-bclxq\" (UID: \"c98f2c01-29ed-4744-8e2b-b62291565faf\") " pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.700578 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmmg7\" (UniqueName: \"kubernetes.io/projected/b1d08010-f05e-4554-9b4c-7acbf51553c3-kube-api-access-dmmg7\") pod \"nova-api-96cc-account-create-j8vlk\" (UID: \"b1d08010-f05e-4554-9b4c-7acbf51553c3\") " pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.701559 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d08010-f05e-4554-9b4c-7acbf51553c3-operator-scripts\") pod \"nova-api-96cc-account-create-j8vlk\" (UID: \"b1d08010-f05e-4554-9b4c-7acbf51553c3\") " pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.725765 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0d93-account-create-7d8vt"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.729944 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmmg7\" (UniqueName: \"kubernetes.io/projected/b1d08010-f05e-4554-9b4c-7acbf51553c3-kube-api-access-dmmg7\") pod \"nova-api-96cc-account-create-j8vlk\" (UID: \"b1d08010-f05e-4554-9b4c-7acbf51553c3\") " pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.745630 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.804133 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hl55\" (UniqueName: \"kubernetes.io/projected/b7bb810b-ae07-4550-a2b1-4eba500f63ad-kube-api-access-5hl55\") pod \"nova-cell0-0d93-account-create-7d8vt\" (UID: \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\") " pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.804443 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7bb810b-ae07-4550-a2b1-4eba500f63ad-operator-scripts\") pod \"nova-cell0-0d93-account-create-7d8vt\" (UID: \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\") " pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.804650 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx6m9\" (UniqueName: \"kubernetes.io/projected/c98f2c01-29ed-4744-8e2b-b62291565faf-kube-api-access-kx6m9\") pod \"nova-cell1-db-create-bclxq\" (UID: \"c98f2c01-29ed-4744-8e2b-b62291565faf\") " pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.804828 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98f2c01-29ed-4744-8e2b-b62291565faf-operator-scripts\") pod \"nova-cell1-db-create-bclxq\" (UID: \"c98f2c01-29ed-4744-8e2b-b62291565faf\") " pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.806449 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98f2c01-29ed-4744-8e2b-b62291565faf-operator-scripts\") pod \"nova-cell1-db-create-bclxq\" (UID: \"c98f2c01-29ed-4744-8e2b-b62291565faf\") " pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.810975 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.824241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx6m9\" (UniqueName: \"kubernetes.io/projected/c98f2c01-29ed-4744-8e2b-b62291565faf-kube-api-access-kx6m9\") pod \"nova-cell1-db-create-bclxq\" (UID: \"c98f2c01-29ed-4744-8e2b-b62291565faf\") " pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.880449 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-43e9-account-create-pzpw6"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.882219 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.886350 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.893638 4688 generic.go:334] "Generic (PLEG): container finished" podID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerID="9b964cf6f5fc61d17820fef569897efba3de46a97d23f2ca4905b64dbf4884b7" exitCode=143 Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.894180 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b276d21b-cfa5-4b99-98f4-c75e85233b0c","Type":"ContainerDied","Data":"9b964cf6f5fc61d17820fef569897efba3de46a97d23f2ca4905b64dbf4884b7"} Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.906452 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hl55\" (UniqueName: \"kubernetes.io/projected/b7bb810b-ae07-4550-a2b1-4eba500f63ad-kube-api-access-5hl55\") pod \"nova-cell0-0d93-account-create-7d8vt\" (UID: \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\") " pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.906502 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7bb810b-ae07-4550-a2b1-4eba500f63ad-operator-scripts\") pod \"nova-cell0-0d93-account-create-7d8vt\" (UID: \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\") " pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.907535 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7bb810b-ae07-4550-a2b1-4eba500f63ad-operator-scripts\") pod \"nova-cell0-0d93-account-create-7d8vt\" (UID: \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\") " pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.910912 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerStarted","Data":"8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233"} Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.920610 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-43e9-account-create-pzpw6"] Nov 25 12:33:34 crc kubenswrapper[4688]: I1125 12:33:34.930392 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hl55\" (UniqueName: \"kubernetes.io/projected/b7bb810b-ae07-4550-a2b1-4eba500f63ad-kube-api-access-5hl55\") pod \"nova-cell0-0d93-account-create-7d8vt\" (UID: \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\") " pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.015065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-964pg\" (UniqueName: \"kubernetes.io/projected/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-kube-api-access-964pg\") pod \"nova-cell1-43e9-account-create-pzpw6\" (UID: \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\") " pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.015166 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-operator-scripts\") pod \"nova-cell1-43e9-account-create-pzpw6\" (UID: \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\") " pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.116645 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-operator-scripts\") pod \"nova-cell1-43e9-account-create-pzpw6\" (UID: \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\") " pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.116849 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-964pg\" (UniqueName: \"kubernetes.io/projected/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-kube-api-access-964pg\") pod \"nova-cell1-43e9-account-create-pzpw6\" (UID: \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\") " pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.117506 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.117601 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-operator-scripts\") pod \"nova-cell1-43e9-account-create-pzpw6\" (UID: \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\") " pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.128003 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.141068 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-964pg\" (UniqueName: \"kubernetes.io/projected/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-kube-api-access-964pg\") pod \"nova-cell1-43e9-account-create-pzpw6\" (UID: \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\") " pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.211136 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hrqzl"] Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.211622 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.371769 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5lgln"] Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.497173 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-96cc-account-create-j8vlk"] Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.563931 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.564218 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerName="glance-log" containerID="cri-o://1ce3730692da953529a011ec8e45e6f64ac03aa179f7bf9e77ed09d1cfacb979" gracePeriod=30 Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.564374 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerName="glance-httpd" containerID="cri-o://faa2eb11f3733696f3fc304f2b05d9101fe6abc330c685ea311d364f9d3bdbd9" gracePeriod=30 Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.717125 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0d93-account-create-7d8vt"] Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.748039 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bclxq"] Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.942702 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerStarted","Data":"dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.944342 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0d93-account-create-7d8vt" event={"ID":"b7bb810b-ae07-4550-a2b1-4eba500f63ad","Type":"ContainerStarted","Data":"639b871c1c02c52b88219db1776b86f48acae0a047d852848d1a835ec43ccbd5"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.949109 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5lgln" event={"ID":"616880c6-3a97-4e57-80ab-6ffc23eddb21","Type":"ContainerStarted","Data":"e2b7a68fae6cbf42304ad003ba486647b80e904684be1b30af805e8b93a0a7d4"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.949757 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5lgln" event={"ID":"616880c6-3a97-4e57-80ab-6ffc23eddb21","Type":"ContainerStarted","Data":"943765b010e2d8963f9ce43f96707a52f1c4adb5e6bc8e16c44583fa0a8f18c6"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.951566 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bclxq" event={"ID":"c98f2c01-29ed-4744-8e2b-b62291565faf","Type":"ContainerStarted","Data":"050802e81ec88617dc11aee538c52685c6c608c05fddea1e40b951f46c5afb33"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.955207 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hrqzl" event={"ID":"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f","Type":"ContainerStarted","Data":"0ed8192feb617d2d347dbb63dcb0ca175825d27b888224c14777467f70808d56"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 
12:33:35.955271 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hrqzl" event={"ID":"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f","Type":"ContainerStarted","Data":"18b0a90fa1aef86a437264b153f7ae02f0085599dfa8b3c50dd966fc52de189b"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.971129 4688 generic.go:334] "Generic (PLEG): container finished" podID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerID="1ce3730692da953529a011ec8e45e6f64ac03aa179f7bf9e77ed09d1cfacb979" exitCode=143 Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.971241 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e00027ea-a0b8-4406-bb9f-5583cbec970f","Type":"ContainerDied","Data":"1ce3730692da953529a011ec8e45e6f64ac03aa179f7bf9e77ed09d1cfacb979"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.979909 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-96cc-account-create-j8vlk" event={"ID":"b1d08010-f05e-4554-9b4c-7acbf51553c3","Type":"ContainerStarted","Data":"d8f003693fc5107a4f1048e07fa1d42aa1a08cfca8b0f3c51289577f07f60d6c"} Nov 25 12:33:35 crc kubenswrapper[4688]: I1125 12:33:35.990019 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-5lgln" podStartSLOduration=1.9899989630000001 podStartE2EDuration="1.989998963s" podCreationTimestamp="2025-11-25 12:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:35.968024523 +0000 UTC m=+1166.077653391" watchObservedRunningTime="2025-11-25 12:33:35.989998963 +0000 UTC m=+1166.099627831" Nov 25 12:33:36 crc kubenswrapper[4688]: I1125 12:33:36.002878 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-hrqzl" podStartSLOduration=2.002859548 podStartE2EDuration="2.002859548s" podCreationTimestamp="2025-11-25 12:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:35.983011685 +0000 UTC m=+1166.092640553" watchObservedRunningTime="2025-11-25 12:33:36.002859548 +0000 UTC m=+1166.112488416" Nov 25 12:33:36 crc kubenswrapper[4688]: I1125 12:33:36.013705 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-96cc-account-create-j8vlk" podStartSLOduration=2.013687768 podStartE2EDuration="2.013687768s" podCreationTimestamp="2025-11-25 12:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:35.998167252 +0000 UTC m=+1166.107796120" watchObservedRunningTime="2025-11-25 12:33:36.013687768 +0000 UTC m=+1166.123316636" Nov 25 12:33:36 crc kubenswrapper[4688]: I1125 12:33:36.050825 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-43e9-account-create-pzpw6"] Nov 25 12:33:36 crc kubenswrapper[4688]: W1125 12:33:36.060716 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dfbc1c4_4a11_470d_8fa3_d3b438268fce.slice/crio-3b9f3a477e110217c8c93e796726056a7bfb926bfb9c8d22a5362e744a8623df WatchSource:0}: Error finding container 3b9f3a477e110217c8c93e796726056a7bfb926bfb9c8d22a5362e744a8623df: Status 404 returned error can't find the container with id 
3b9f3a477e110217c8c93e796726056a7bfb926bfb9c8d22a5362e744a8623df Nov 25 12:33:36 crc kubenswrapper[4688]: I1125 12:33:36.332739 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7b778675bb-bfk2b" podUID="e75ca9ca-8146-4469-bab4-8db0e4735f0e" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.172:8000/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 12:33:36 crc kubenswrapper[4688]: I1125 12:33:36.480796 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:33:36 crc kubenswrapper[4688]: I1125 12:33:36.997762 4688 generic.go:334] "Generic (PLEG): container finished" podID="c98f2c01-29ed-4744-8e2b-b62291565faf" containerID="e1ca3f7b7267f335b219bfc1c8037cc32dce896e764c9bc47a849cf40e31288e" exitCode=0 Nov 25 12:33:36 crc kubenswrapper[4688]: I1125 12:33:36.998086 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bclxq" event={"ID":"c98f2c01-29ed-4744-8e2b-b62291565faf","Type":"ContainerDied","Data":"e1ca3f7b7267f335b219bfc1c8037cc32dce896e764c9bc47a849cf40e31288e"} Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.002090 4688 generic.go:334] "Generic (PLEG): container finished" podID="5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f" containerID="0ed8192feb617d2d347dbb63dcb0ca175825d27b888224c14777467f70808d56" exitCode=0 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.002147 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hrqzl" event={"ID":"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f","Type":"ContainerDied","Data":"0ed8192feb617d2d347dbb63dcb0ca175825d27b888224c14777467f70808d56"} Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.006477 4688 generic.go:334] "Generic (PLEG): container finished" podID="b1d08010-f05e-4554-9b4c-7acbf51553c3" containerID="e1ff6fac8b2b13e667d472ab5af5fd20d26bcfd86ba9c910d5d818c96b72f747" exitCode=0 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.006674 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-96cc-account-create-j8vlk" event={"ID":"b1d08010-f05e-4554-9b4c-7acbf51553c3","Type":"ContainerDied","Data":"e1ff6fac8b2b13e667d472ab5af5fd20d26bcfd86ba9c910d5d818c96b72f747"} Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.009941 4688 generic.go:334] "Generic (PLEG): container finished" podID="5dfbc1c4-4a11-470d-8fa3-d3b438268fce" containerID="1e2cd5746d5093bf6ca86dceaf489b21505180932e19318b5d6a385fd0289f23" exitCode=0 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.009993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43e9-account-create-pzpw6" event={"ID":"5dfbc1c4-4a11-470d-8fa3-d3b438268fce","Type":"ContainerDied","Data":"1e2cd5746d5093bf6ca86dceaf489b21505180932e19318b5d6a385fd0289f23"} Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.010016 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43e9-account-create-pzpw6" event={"ID":"5dfbc1c4-4a11-470d-8fa3-d3b438268fce","Type":"ContainerStarted","Data":"3b9f3a477e110217c8c93e796726056a7bfb926bfb9c8d22a5362e744a8623df"} Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.018736 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="ceilometer-central-agent" containerID="cri-o://1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c" 
gracePeriod=30 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.018783 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.018850 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="sg-core" containerID="cri-o://dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44" gracePeriod=30 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.018865 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="ceilometer-notification-agent" containerID="cri-o://8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233" gracePeriod=30 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.018929 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="proxy-httpd" containerID="cri-o://5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f" gracePeriod=30 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.043327 4688 generic.go:334] "Generic (PLEG): container finished" podID="b7bb810b-ae07-4550-a2b1-4eba500f63ad" containerID="41de2305dfae4a0598e00f6ed165d47417ca6038cd1e6ed8a962109f476518df" exitCode=0 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.043408 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0d93-account-create-7d8vt" event={"ID":"b7bb810b-ae07-4550-a2b1-4eba500f63ad","Type":"ContainerDied","Data":"41de2305dfae4a0598e00f6ed165d47417ca6038cd1e6ed8a962109f476518df"} Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.050590 4688 generic.go:334] "Generic (PLEG): container finished" podID="616880c6-3a97-4e57-80ab-6ffc23eddb21" containerID="e2b7a68fae6cbf42304ad003ba486647b80e904684be1b30af805e8b93a0a7d4" exitCode=0 Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.050754 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5lgln" event={"ID":"616880c6-3a97-4e57-80ab-6ffc23eddb21","Type":"ContainerDied","Data":"e2b7a68fae6cbf42304ad003ba486647b80e904684be1b30af805e8b93a0a7d4"} Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.094879 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.43404811 podStartE2EDuration="6.09486202s" podCreationTimestamp="2025-11-25 12:33:31 +0000 UTC" firstStartedPulling="2025-11-25 12:33:32.708077969 +0000 UTC m=+1162.817706837" lastFinishedPulling="2025-11-25 12:33:36.368891879 +0000 UTC m=+1166.478520747" observedRunningTime="2025-11-25 12:33:37.092482196 +0000 UTC m=+1167.202111074" watchObservedRunningTime="2025-11-25 12:33:37.09486202 +0000 UTC m=+1167.204490888" Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.516499 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7f444b957c-4fqdt" Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.599141 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5c9cbfcdb-bk4g8"] Nov 25 12:33:37 crc kubenswrapper[4688]: I1125 12:33:37.896387 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7d45cc658d-s47zc" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.001784 
4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78f5f9ff74-htn8s"] Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.088004 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerID="dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44" exitCode=2 Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.088335 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerID="8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233" exitCode=0 Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.088384 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerStarted","Data":"5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f"} Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.088409 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerDied","Data":"dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44"} Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.088422 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerDied","Data":"8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233"} Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.133732 4688 generic.go:334] "Generic (PLEG): container finished" podID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerID="a3b0b44a43dbc5e51249385c60e70fb4e3c5df01e8a46971467cdb0d490a491a" exitCode=0 Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.133792 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b276d21b-cfa5-4b99-98f4-c75e85233b0c","Type":"ContainerDied","Data":"a3b0b44a43dbc5e51249385c60e70fb4e3c5df01e8a46971467cdb0d490a491a"} Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.193773 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-ff9c99746-zhh6h" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.273011 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.281209 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6885f88968-shb6s"] Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.281600 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6885f88968-shb6s" podUID="de20cc3c-3de9-4e8a-97ba-203205cbb278" containerName="heat-engine" containerID="cri-o://7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" gracePeriod=60 Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.326142 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.346661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-combined-ca-bundle\") pod \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.346744 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-scripts\") pod \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.346784 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-logs\") pod \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.346913 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-public-tls-certs\") pod \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.346948 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55fvf\" (UniqueName: \"kubernetes.io/projected/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-kube-api-access-55fvf\") pod \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.346994 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-config-data\") pod \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.347026 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data\") pod \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.347048 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7299\" (UniqueName: \"kubernetes.io/projected/b276d21b-cfa5-4b99-98f4-c75e85233b0c-kube-api-access-s7299\") pod \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.347066 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.347121 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data-custom\") pod \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\" (UID: 
\"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.347233 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-combined-ca-bundle\") pod \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\" (UID: \"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.347268 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-httpd-run\") pod \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\" (UID: \"b276d21b-cfa5-4b99-98f4-c75e85233b0c\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.348840 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b276d21b-cfa5-4b99-98f4-c75e85233b0c" (UID: "b276d21b-cfa5-4b99-98f4-c75e85233b0c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.351129 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-logs" (OuterVolumeSpecName: "logs") pod "b276d21b-cfa5-4b99-98f4-c75e85233b0c" (UID: "b276d21b-cfa5-4b99-98f4-c75e85233b0c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.374819 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-scripts" (OuterVolumeSpecName: "scripts") pod "b276d21b-cfa5-4b99-98f4-c75e85233b0c" (UID: "b276d21b-cfa5-4b99-98f4-c75e85233b0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.374859 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "b276d21b-cfa5-4b99-98f4-c75e85233b0c" (UID: "b276d21b-cfa5-4b99-98f4-c75e85233b0c"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.382610 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" (UID: "815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.384038 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b276d21b-cfa5-4b99-98f4-c75e85233b0c-kube-api-access-s7299" (OuterVolumeSpecName: "kube-api-access-s7299") pod "b276d21b-cfa5-4b99-98f4-c75e85233b0c" (UID: "b276d21b-cfa5-4b99-98f4-c75e85233b0c"). InnerVolumeSpecName "kube-api-access-s7299". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.384684 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-kube-api-access-55fvf" (OuterVolumeSpecName: "kube-api-access-55fvf") pod "815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" (UID: "815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b"). InnerVolumeSpecName "kube-api-access-55fvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.414735 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b276d21b-cfa5-4b99-98f4-c75e85233b0c" (UID: "b276d21b-cfa5-4b99-98f4-c75e85233b0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.416159 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" (UID: "815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.449342 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.449368 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.449379 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.449387 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.449395 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.449403 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.449412 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b276d21b-cfa5-4b99-98f4-c75e85233b0c-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.449420 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55fvf\" (UniqueName: \"kubernetes.io/projected/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-kube-api-access-55fvf\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc 
kubenswrapper[4688]: I1125 12:33:38.449428 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7299\" (UniqueName: \"kubernetes.io/projected/b276d21b-cfa5-4b99-98f4-c75e85233b0c-kube-api-access-s7299\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.469209 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b276d21b-cfa5-4b99-98f4-c75e85233b0c" (UID: "b276d21b-cfa5-4b99-98f4-c75e85233b0c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.488836 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.507697 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data" (OuterVolumeSpecName: "config-data") pod "815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" (UID: "815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: E1125 12:33:38.513360 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2052cd2_0348_4c12_ac00_b221c4fa8dcc.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.551461 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.551514 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.551537 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.553957 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-config-data" (OuterVolumeSpecName: "config-data") pod "b276d21b-cfa5-4b99-98f4-c75e85233b0c" (UID: "b276d21b-cfa5-4b99-98f4-c75e85233b0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.653669 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b276d21b-cfa5-4b99-98f4-c75e85233b0c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.796682 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.860452 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data\") pod \"55e36d00-8496-4835-a57c-2cae27092645\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.860581 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpgfz\" (UniqueName: \"kubernetes.io/projected/55e36d00-8496-4835-a57c-2cae27092645-kube-api-access-xpgfz\") pod \"55e36d00-8496-4835-a57c-2cae27092645\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.860769 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data-custom\") pod \"55e36d00-8496-4835-a57c-2cae27092645\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.860808 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-combined-ca-bundle\") pod \"55e36d00-8496-4835-a57c-2cae27092645\" (UID: \"55e36d00-8496-4835-a57c-2cae27092645\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.881637 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "55e36d00-8496-4835-a57c-2cae27092645" (UID: "55e36d00-8496-4835-a57c-2cae27092645"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.884509 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e36d00-8496-4835-a57c-2cae27092645-kube-api-access-xpgfz" (OuterVolumeSpecName: "kube-api-access-xpgfz") pod "55e36d00-8496-4835-a57c-2cae27092645" (UID: "55e36d00-8496-4835-a57c-2cae27092645"). InnerVolumeSpecName "kube-api-access-xpgfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.920154 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.949463 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55e36d00-8496-4835-a57c-2cae27092645" (UID: "55e36d00-8496-4835-a57c-2cae27092645"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.959853 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.972622 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5rkt\" (UniqueName: \"kubernetes.io/projected/616880c6-3a97-4e57-80ab-6ffc23eddb21-kube-api-access-q5rkt\") pod \"616880c6-3a97-4e57-80ab-6ffc23eddb21\" (UID: \"616880c6-3a97-4e57-80ab-6ffc23eddb21\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.972814 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616880c6-3a97-4e57-80ab-6ffc23eddb21-operator-scripts\") pod \"616880c6-3a97-4e57-80ab-6ffc23eddb21\" (UID: \"616880c6-3a97-4e57-80ab-6ffc23eddb21\") " Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.973280 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.973298 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.973310 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpgfz\" (UniqueName: \"kubernetes.io/projected/55e36d00-8496-4835-a57c-2cae27092645-kube-api-access-xpgfz\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.974134 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616880c6-3a97-4e57-80ab-6ffc23eddb21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "616880c6-3a97-4e57-80ab-6ffc23eddb21" (UID: "616880c6-3a97-4e57-80ab-6ffc23eddb21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.983397 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616880c6-3a97-4e57-80ab-6ffc23eddb21-kube-api-access-q5rkt" (OuterVolumeSpecName: "kube-api-access-q5rkt") pod "616880c6-3a97-4e57-80ab-6ffc23eddb21" (UID: "616880c6-3a97-4e57-80ab-6ffc23eddb21"). InnerVolumeSpecName "kube-api-access-q5rkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.983796 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:38 crc kubenswrapper[4688]: I1125 12:33:38.989674 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data" (OuterVolumeSpecName: "config-data") pod "55e36d00-8496-4835-a57c-2cae27092645" (UID: "55e36d00-8496-4835-a57c-2cae27092645"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.002985 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.014286 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074407 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-operator-scripts\") pod \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\" (UID: \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074447 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hl55\" (UniqueName: \"kubernetes.io/projected/b7bb810b-ae07-4550-a2b1-4eba500f63ad-kube-api-access-5hl55\") pod \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\" (UID: \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074480 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98f2c01-29ed-4744-8e2b-b62291565faf-operator-scripts\") pod \"c98f2c01-29ed-4744-8e2b-b62291565faf\" (UID: \"c98f2c01-29ed-4744-8e2b-b62291565faf\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074635 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7bb810b-ae07-4550-a2b1-4eba500f63ad-operator-scripts\") pod \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\" (UID: \"b7bb810b-ae07-4550-a2b1-4eba500f63ad\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074685 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d08010-f05e-4554-9b4c-7acbf51553c3-operator-scripts\") pod \"b1d08010-f05e-4554-9b4c-7acbf51553c3\" (UID: \"b1d08010-f05e-4554-9b4c-7acbf51553c3\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074729 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx6m9\" (UniqueName: \"kubernetes.io/projected/c98f2c01-29ed-4744-8e2b-b62291565faf-kube-api-access-kx6m9\") pod \"c98f2c01-29ed-4744-8e2b-b62291565faf\" (UID: \"c98f2c01-29ed-4744-8e2b-b62291565faf\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074756 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcbff\" (UniqueName: \"kubernetes.io/projected/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-kube-api-access-zcbff\") pod \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\" (UID: \"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074791 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmmg7\" (UniqueName: \"kubernetes.io/projected/b1d08010-f05e-4554-9b4c-7acbf51553c3-kube-api-access-dmmg7\") pod \"b1d08010-f05e-4554-9b4c-7acbf51553c3\" (UID: \"b1d08010-f05e-4554-9b4c-7acbf51553c3\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.074904 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f" (UID: "5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.075284 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5rkt\" (UniqueName: \"kubernetes.io/projected/616880c6-3a97-4e57-80ab-6ffc23eddb21-kube-api-access-q5rkt\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.075310 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55e36d00-8496-4835-a57c-2cae27092645-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.075317 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7bb810b-ae07-4550-a2b1-4eba500f63ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7bb810b-ae07-4550-a2b1-4eba500f63ad" (UID: "b7bb810b-ae07-4550-a2b1-4eba500f63ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.075323 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616880c6-3a97-4e57-80ab-6ffc23eddb21-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.075360 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.080214 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.080806 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7bb810b-ae07-4550-a2b1-4eba500f63ad-kube-api-access-5hl55" (OuterVolumeSpecName: "kube-api-access-5hl55") pod "b7bb810b-ae07-4550-a2b1-4eba500f63ad" (UID: "b7bb810b-ae07-4550-a2b1-4eba500f63ad"). InnerVolumeSpecName "kube-api-access-5hl55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.081243 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c98f2c01-29ed-4744-8e2b-b62291565faf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c98f2c01-29ed-4744-8e2b-b62291565faf" (UID: "c98f2c01-29ed-4744-8e2b-b62291565faf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.081755 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d08010-f05e-4554-9b4c-7acbf51553c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1d08010-f05e-4554-9b4c-7acbf51553c3" (UID: "b1d08010-f05e-4554-9b4c-7acbf51553c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.082156 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d08010-f05e-4554-9b4c-7acbf51553c3-kube-api-access-dmmg7" (OuterVolumeSpecName: "kube-api-access-dmmg7") pod "b1d08010-f05e-4554-9b4c-7acbf51553c3" (UID: "b1d08010-f05e-4554-9b4c-7acbf51553c3"). InnerVolumeSpecName "kube-api-access-dmmg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.084151 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-kube-api-access-zcbff" (OuterVolumeSpecName: "kube-api-access-zcbff") pod "5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f" (UID: "5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f"). InnerVolumeSpecName "kube-api-access-zcbff". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.089546 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98f2c01-29ed-4744-8e2b-b62291565faf-kube-api-access-kx6m9" (OuterVolumeSpecName: "kube-api-access-kx6m9") pod "c98f2c01-29ed-4744-8e2b-b62291565faf" (UID: "c98f2c01-29ed-4744-8e2b-b62291565faf"). InnerVolumeSpecName "kube-api-access-kx6m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.177051 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-964pg\" (UniqueName: \"kubernetes.io/projected/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-kube-api-access-964pg\") pod \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\" (UID: \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.177227 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hrqzl" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.177250 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-operator-scripts\") pod \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\" (UID: \"5dfbc1c4-4a11-470d-8fa3-d3b438268fce\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.177350 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hrqzl" event={"ID":"5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f","Type":"ContainerDied","Data":"18b0a90fa1aef86a437264b153f7ae02f0085599dfa8b3c50dd966fc52de189b"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.177391 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18b0a90fa1aef86a437264b153f7ae02f0085599dfa8b3c50dd966fc52de189b" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.178001 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5dfbc1c4-4a11-470d-8fa3-d3b438268fce" (UID: "5dfbc1c4-4a11-470d-8fa3-d3b438268fce"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.180835 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.180868 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hl55\" (UniqueName: \"kubernetes.io/projected/b7bb810b-ae07-4550-a2b1-4eba500f63ad-kube-api-access-5hl55\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.180883 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98f2c01-29ed-4744-8e2b-b62291565faf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.180904 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7bb810b-ae07-4550-a2b1-4eba500f63ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.180915 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d08010-f05e-4554-9b4c-7acbf51553c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.180930 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx6m9\" (UniqueName: \"kubernetes.io/projected/c98f2c01-29ed-4744-8e2b-b62291565faf-kube-api-access-kx6m9\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.180941 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcbff\" (UniqueName: \"kubernetes.io/projected/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f-kube-api-access-zcbff\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.180958 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmmg7\" (UniqueName: \"kubernetes.io/projected/b1d08010-f05e-4554-9b4c-7acbf51553c3-kube-api-access-dmmg7\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.182020 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-kube-api-access-964pg" (OuterVolumeSpecName: "kube-api-access-964pg") pod "5dfbc1c4-4a11-470d-8fa3-d3b438268fce" (UID: "5dfbc1c4-4a11-470d-8fa3-d3b438268fce"). InnerVolumeSpecName "kube-api-access-964pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.206100 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-96cc-account-create-j8vlk" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.206387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-96cc-account-create-j8vlk" event={"ID":"b1d08010-f05e-4554-9b4c-7acbf51553c3","Type":"ContainerDied","Data":"d8f003693fc5107a4f1048e07fa1d42aa1a08cfca8b0f3c51289577f07f60d6c"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.206464 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f003693fc5107a4f1048e07fa1d42aa1a08cfca8b0f3c51289577f07f60d6c" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.216685 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43e9-account-create-pzpw6" event={"ID":"5dfbc1c4-4a11-470d-8fa3-d3b438268fce","Type":"ContainerDied","Data":"3b9f3a477e110217c8c93e796726056a7bfb926bfb9c8d22a5362e744a8623df"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.224587 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b9f3a477e110217c8c93e796726056a7bfb926bfb9c8d22a5362e744a8623df" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.224613 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5lgln" event={"ID":"616880c6-3a97-4e57-80ab-6ffc23eddb21","Type":"ContainerDied","Data":"943765b010e2d8963f9ce43f96707a52f1c4adb5e6bc8e16c44583fa0a8f18c6"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.224632 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="943765b010e2d8963f9ce43f96707a52f1c4adb5e6bc8e16c44583fa0a8f18c6" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.223990 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5lgln" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.216802 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-43e9-account-create-pzpw6" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.258654 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b276d21b-cfa5-4b99-98f4-c75e85233b0c","Type":"ContainerDied","Data":"6f6f01a6e7e4d37eac70e81564bf68a909d1ab334e7cf539b8b40f38d1d194e8"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.258715 4688 scope.go:117] "RemoveContainer" containerID="a3b0b44a43dbc5e51249385c60e70fb4e3c5df01e8a46971467cdb0d490a491a" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.258877 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.284279 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-964pg\" (UniqueName: \"kubernetes.io/projected/5dfbc1c4-4a11-470d-8fa3-d3b438268fce-kube-api-access-964pg\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.286829 4688 generic.go:334] "Generic (PLEG): container finished" podID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerID="faa2eb11f3733696f3fc304f2b05d9101fe6abc330c685ea311d364f9d3bdbd9" exitCode=0 Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.286945 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e00027ea-a0b8-4406-bb9f-5583cbec970f","Type":"ContainerDied","Data":"faa2eb11f3733696f3fc304f2b05d9101fe6abc330c685ea311d364f9d3bdbd9"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.304147 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f5f9ff74-htn8s" event={"ID":"55e36d00-8496-4835-a57c-2cae27092645","Type":"ContainerDied","Data":"e7b96ecf6ccda463a06be941c9bff9aeb9173bd24ad65b2965f4aa8a55eb5c3c"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.304303 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f5f9ff74-htn8s" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.315828 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" event={"ID":"815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b","Type":"ContainerDied","Data":"a31ae7ca324d2275961d7ca7b8c02d00250ae7daab6f5ac050cb991bc6285950"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.315931 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c9cbfcdb-bk4g8" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.320440 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.331435 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0d93-account-create-7d8vt" event={"ID":"b7bb810b-ae07-4550-a2b1-4eba500f63ad","Type":"ContainerDied","Data":"639b871c1c02c52b88219db1776b86f48acae0a047d852848d1a835ec43ccbd5"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.331479 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639b871c1c02c52b88219db1776b86f48acae0a047d852848d1a835ec43ccbd5" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.331580 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0d93-account-create-7d8vt" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.332719 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.337595 4688 scope.go:117] "RemoveContainer" containerID="9b964cf6f5fc61d17820fef569897efba3de46a97d23f2ca4905b64dbf4884b7" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.338886 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bclxq" event={"ID":"c98f2c01-29ed-4744-8e2b-b62291565faf","Type":"ContainerDied","Data":"050802e81ec88617dc11aee538c52685c6c608c05fddea1e40b951f46c5afb33"} Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.338921 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="050802e81ec88617dc11aee538c52685c6c608c05fddea1e40b951f46c5afb33" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.338993 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bclxq" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.343285 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345276 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" containerName="heat-cfnapi" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345323 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" containerName="heat-cfnapi" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345375 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerName="glance-log" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345385 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerName="glance-log" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345407 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345415 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345478 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d08010-f05e-4554-9b4c-7acbf51553c3" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345490 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d08010-f05e-4554-9b4c-7acbf51553c3" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345499 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98f2c01-29ed-4744-8e2b-b62291565faf" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345508 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98f2c01-29ed-4744-8e2b-b62291565faf" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345587 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e36d00-8496-4835-a57c-2cae27092645" containerName="heat-api" Nov 25 12:33:39 crc kubenswrapper[4688]: 
I1125 12:33:39.345665 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e36d00-8496-4835-a57c-2cae27092645" containerName="heat-api" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345696 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616880c6-3a97-4e57-80ab-6ffc23eddb21" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345705 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="616880c6-3a97-4e57-80ab-6ffc23eddb21" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345715 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e36d00-8496-4835-a57c-2cae27092645" containerName="heat-api" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345747 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e36d00-8496-4835-a57c-2cae27092645" containerName="heat-api" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345765 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerName="glance-httpd" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345772 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerName="glance-httpd" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345785 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfbc1c4-4a11-470d-8fa3-d3b438268fce" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345791 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfbc1c4-4a11-470d-8fa3-d3b438268fce" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.345829 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7bb810b-ae07-4550-a2b1-4eba500f63ad" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.345838 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7bb810b-ae07-4550-a2b1-4eba500f63ad" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.346190 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7bb810b-ae07-4550-a2b1-4eba500f63ad" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.346257 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d08010-f05e-4554-9b4c-7acbf51553c3" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.346269 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e36d00-8496-4835-a57c-2cae27092645" containerName="heat-api" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.346335 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.346358 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfbc1c4-4a11-470d-8fa3-d3b438268fce" containerName="mariadb-account-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.346419 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="616880c6-3a97-4e57-80ab-6ffc23eddb21" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.346465 4688 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c98f2c01-29ed-4744-8e2b-b62291565faf" containerName="mariadb-database-create" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.347229 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" containerName="heat-cfnapi" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.349389 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerName="glance-log" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.349477 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" containerName="glance-httpd" Nov 25 12:33:39 crc kubenswrapper[4688]: E1125 12:33:39.353962 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" containerName="heat-cfnapi" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.354007 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" containerName="heat-cfnapi" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.359653 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e36d00-8496-4835-a57c-2cae27092645" containerName="heat-api" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.359738 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" containerName="heat-cfnapi" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.360628 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.367971 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.368333 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.388198 4688 scope.go:117] "RemoveContainer" containerID="8797f20742e92bfed2d0fbba6930da675732bef989cbde4384bbca9fd531e815" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.398599 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.411571 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5c9cbfcdb-bk4g8"] Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.425315 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5c9cbfcdb-bk4g8"] Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.436922 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78f5f9ff74-htn8s"] Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.448342 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78f5f9ff74-htn8s"] Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.449791 4688 scope.go:117] "RemoveContainer" containerID="6e7f89dc5e5a3cc333adc947b2aa8cd02b7e6adc4d027f70defefff584ced38b" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.492409 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " 
pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.492532 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdm5m\" (UniqueName: \"kubernetes.io/projected/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-kube-api-access-gdm5m\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.492562 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-logs\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.492623 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.492681 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.492775 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.492827 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.493045 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.604120 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.604224 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdm5m\" (UniqueName: \"kubernetes.io/projected/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-kube-api-access-gdm5m\") pod 
\"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.604246 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-logs\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.604289 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.604312 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.604368 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.604390 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.604477 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.605848 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.606319 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.612349 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-logs\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " 
pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.612939 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.632814 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.636646 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdm5m\" (UniqueName: \"kubernetes.io/projected/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-kube-api-access-gdm5m\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.639814 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.646765 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.651037 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c\") " pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.724356 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.767944 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.910777 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-config-data\") pod \"e00027ea-a0b8-4406-bb9f-5583cbec970f\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.910824 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"e00027ea-a0b8-4406-bb9f-5583cbec970f\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.910895 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-internal-tls-certs\") pod \"e00027ea-a0b8-4406-bb9f-5583cbec970f\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.910933 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-combined-ca-bundle\") pod \"e00027ea-a0b8-4406-bb9f-5583cbec970f\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.910994 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-httpd-run\") pod \"e00027ea-a0b8-4406-bb9f-5583cbec970f\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.911143 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzqrf\" (UniqueName: \"kubernetes.io/projected/e00027ea-a0b8-4406-bb9f-5583cbec970f-kube-api-access-gzqrf\") pod \"e00027ea-a0b8-4406-bb9f-5583cbec970f\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.911170 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-scripts\") pod \"e00027ea-a0b8-4406-bb9f-5583cbec970f\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.911189 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-logs\") pod \"e00027ea-a0b8-4406-bb9f-5583cbec970f\" (UID: \"e00027ea-a0b8-4406-bb9f-5583cbec970f\") " Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.913902 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e00027ea-a0b8-4406-bb9f-5583cbec970f" (UID: "e00027ea-a0b8-4406-bb9f-5583cbec970f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.914276 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-logs" (OuterVolumeSpecName: "logs") pod "e00027ea-a0b8-4406-bb9f-5583cbec970f" (UID: "e00027ea-a0b8-4406-bb9f-5583cbec970f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.917401 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-scripts" (OuterVolumeSpecName: "scripts") pod "e00027ea-a0b8-4406-bb9f-5583cbec970f" (UID: "e00027ea-a0b8-4406-bb9f-5583cbec970f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.920882 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00027ea-a0b8-4406-bb9f-5583cbec970f-kube-api-access-gzqrf" (OuterVolumeSpecName: "kube-api-access-gzqrf") pod "e00027ea-a0b8-4406-bb9f-5583cbec970f" (UID: "e00027ea-a0b8-4406-bb9f-5583cbec970f"). InnerVolumeSpecName "kube-api-access-gzqrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.924760 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "e00027ea-a0b8-4406-bb9f-5583cbec970f" (UID: "e00027ea-a0b8-4406-bb9f-5583cbec970f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 12:33:39 crc kubenswrapper[4688]: I1125 12:33:39.973709 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e00027ea-a0b8-4406-bb9f-5583cbec970f" (UID: "e00027ea-a0b8-4406-bb9f-5583cbec970f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.012882 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzqrf\" (UniqueName: \"kubernetes.io/projected/e00027ea-a0b8-4406-bb9f-5583cbec970f-kube-api-access-gzqrf\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.012915 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.012926 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.012949 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.012960 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.012969 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e00027ea-a0b8-4406-bb9f-5583cbec970f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.014257 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-config-data" (OuterVolumeSpecName: "config-data") pod "e00027ea-a0b8-4406-bb9f-5583cbec970f" (UID: "e00027ea-a0b8-4406-bb9f-5583cbec970f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.023961 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e00027ea-a0b8-4406-bb9f-5583cbec970f" (UID: "e00027ea-a0b8-4406-bb9f-5583cbec970f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.039793 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.115236 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.115280 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.115294 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e00027ea-a0b8-4406-bb9f-5583cbec970f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.349614 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e00027ea-a0b8-4406-bb9f-5583cbec970f","Type":"ContainerDied","Data":"7a17f6459efdff3be73c6a636f407449b36e1274fc6bebedc7fd4657b25a0a5d"} Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.350154 4688 scope.go:117] "RemoveContainer" containerID="faa2eb11f3733696f3fc304f2b05d9101fe6abc330c685ea311d364f9d3bdbd9" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.350279 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.356441 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.400937 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.411749 4688 scope.go:117] "RemoveContainer" containerID="1ce3730692da953529a011ec8e45e6f64ac03aa179f7bf9e77ed09d1cfacb979" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.416680 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.457630 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:33:40 crc kubenswrapper[4688]: E1125 12:33:40.458022 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerName="glance-httpd" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.458035 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerName="glance-httpd" Nov 25 12:33:40 crc kubenswrapper[4688]: E1125 12:33:40.458072 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerName="glance-log" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.458079 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerName="glance-log" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.458248 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerName="glance-log" 
Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.458270 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" containerName="glance-httpd" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.459312 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.463759 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.464005 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.510271 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.631719 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.631802 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.631835 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.631962 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.631987 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78n45\" (UniqueName: \"kubernetes.io/projected/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-kube-api-access-78n45\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.632065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.632104 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.632121 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737353 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737407 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737434 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737499 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737534 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737612 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737629 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78n45\" (UniqueName: \"kubernetes.io/projected/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-kube-api-access-78n45\") pod 
\"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.737811 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.738460 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.739039 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.746358 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.749875 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.751161 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.767910 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78n45\" (UniqueName: \"kubernetes.io/projected/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-kube-api-access-78n45\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.772988 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274d19d3-bdcd-44c9-b44e-48f97d1dc4f5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.773492 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e36d00-8496-4835-a57c-2cae27092645" path="/var/lib/kubelet/pods/55e36d00-8496-4835-a57c-2cae27092645/volumes" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.786487 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b" path="/var/lib/kubelet/pods/815ea1ee-ecb5-4cc3-8ce6-7dcea21b9d2b/volumes" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.800193 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b276d21b-cfa5-4b99-98f4-c75e85233b0c" path="/var/lib/kubelet/pods/b276d21b-cfa5-4b99-98f4-c75e85233b0c/volumes" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.803056 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00027ea-a0b8-4406-bb9f-5583cbec970f" path="/var/lib/kubelet/pods/e00027ea-a0b8-4406-bb9f-5583cbec970f/volumes" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.846758 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5\") " pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: I1125 12:33:40.902458 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:40 crc kubenswrapper[4688]: E1125 12:33:40.987258 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 12:33:41 crc kubenswrapper[4688]: E1125 12:33:41.007022 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 12:33:41 crc kubenswrapper[4688]: E1125 12:33:41.011856 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 12:33:41 crc kubenswrapper[4688]: E1125 12:33:41.011951 4688 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6885f88968-shb6s" podUID="de20cc3c-3de9-4e8a-97ba-203205cbb278" containerName="heat-engine" Nov 25 12:33:41 crc kubenswrapper[4688]: I1125 12:33:41.398714 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c","Type":"ContainerStarted","Data":"a98989574de66c5d8863fe20020d2b43bcdfecb397c8e426cd7921c6307f671c"} Nov 25 12:33:41 crc kubenswrapper[4688]: I1125 12:33:41.398976 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c","Type":"ContainerStarted","Data":"f94e1b98b4a54071851e9e1ffd01d00c1e858fc396c709beeb949da521fc4e5c"} Nov 25 12:33:41 crc kubenswrapper[4688]: I1125 12:33:41.585108 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 12:33:41 crc 
kubenswrapper[4688]: W1125 12:33:41.601082 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod274d19d3_bdcd_44c9_b44e_48f97d1dc4f5.slice/crio-e2f83ba73f8e1b04b7b2485af3c59b1db99ef5818ecc445d8302465032befccb WatchSource:0}: Error finding container e2f83ba73f8e1b04b7b2485af3c59b1db99ef5818ecc445d8302465032befccb: Status 404 returned error can't find the container with id e2f83ba73f8e1b04b7b2485af3c59b1db99ef5818ecc445d8302465032befccb Nov 25 12:33:42 crc kubenswrapper[4688]: I1125 12:33:42.415693 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5","Type":"ContainerStarted","Data":"e2f83ba73f8e1b04b7b2485af3c59b1db99ef5818ecc445d8302465032befccb"} Nov 25 12:33:43 crc kubenswrapper[4688]: I1125 12:33:43.427169 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c","Type":"ContainerStarted","Data":"ae1abcfbac7c45ba0ff2732fbf041c7c14a9a029ca4dce407e3dc94352c1bd88"} Nov 25 12:33:43 crc kubenswrapper[4688]: I1125 12:33:43.429779 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5","Type":"ContainerStarted","Data":"1195ade72bd5854adf0fcecb987c9dd1942a8d53252176c179607e7f328f735b"} Nov 25 12:33:43 crc kubenswrapper[4688]: I1125 12:33:43.452986 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.452962926 podStartE2EDuration="4.452962926s" podCreationTimestamp="2025-11-25 12:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:43.447611472 +0000 UTC m=+1173.557240340" watchObservedRunningTime="2025-11-25 12:33:43.452962926 +0000 UTC m=+1173.562591794" Nov 25 12:33:44 crc kubenswrapper[4688]: I1125 12:33:44.440805 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"274d19d3-bdcd-44c9-b44e-48f97d1dc4f5","Type":"ContainerStarted","Data":"77b7987c3b75d3ca8f119efa17695ace05f021a217b82a419b8a001fba6ad376"} Nov 25 12:33:44 crc kubenswrapper[4688]: I1125 12:33:44.469094 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.469073827 podStartE2EDuration="4.469073827s" podCreationTimestamp="2025-11-25 12:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:33:44.46539546 +0000 UTC m=+1174.575024328" watchObservedRunningTime="2025-11-25 12:33:44.469073827 +0000 UTC m=+1174.578702695" Nov 25 12:33:44 crc kubenswrapper[4688]: I1125 12:33:44.938113 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdzdj"] Nov 25 12:33:44 crc kubenswrapper[4688]: I1125 12:33:44.939378 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:44 crc kubenswrapper[4688]: I1125 12:33:44.941491 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jlmrt" Nov 25 12:33:44 crc kubenswrapper[4688]: I1125 12:33:44.941577 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 12:33:44 crc kubenswrapper[4688]: I1125 12:33:44.951904 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 12:33:44 crc kubenswrapper[4688]: I1125 12:33:44.954988 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdzdj"] Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.130247 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-config-data\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.130313 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqhz7\" (UniqueName: \"kubernetes.io/projected/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-kube-api-access-dqhz7\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.130366 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-scripts\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.130410 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.232506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-scripts\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.232596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.232806 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-config-data\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: 
\"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.232837 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqhz7\" (UniqueName: \"kubernetes.io/projected/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-kube-api-access-dqhz7\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.238807 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.239424 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-config-data\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.241624 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-scripts\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.262592 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqhz7\" (UniqueName: \"kubernetes.io/projected/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-kube-api-access-dqhz7\") pod \"nova-cell0-conductor-db-sync-wdzdj\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:45 crc kubenswrapper[4688]: I1125 12:33:45.559688 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:33:46 crc kubenswrapper[4688]: I1125 12:33:46.053010 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdzdj"] Nov 25 12:33:46 crc kubenswrapper[4688]: W1125 12:33:46.069352 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42ae21eb_1699_4d0b_ba93_54b5d07e24ae.slice/crio-e6eaa0d40720273990335120015d145aa3069e51f2511d7098d91e9f1434baee WatchSource:0}: Error finding container e6eaa0d40720273990335120015d145aa3069e51f2511d7098d91e9f1434baee: Status 404 returned error can't find the container with id e6eaa0d40720273990335120015d145aa3069e51f2511d7098d91e9f1434baee Nov 25 12:33:46 crc kubenswrapper[4688]: I1125 12:33:46.469249 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" event={"ID":"42ae21eb-1699-4d0b-ba93-54b5d07e24ae","Type":"ContainerStarted","Data":"e6eaa0d40720273990335120015d145aa3069e51f2511d7098d91e9f1434baee"} Nov 25 12:33:47 crc kubenswrapper[4688]: I1125 12:33:47.482104 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerID="1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c" exitCode=0 Nov 25 12:33:47 crc kubenswrapper[4688]: I1125 12:33:47.482186 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerDied","Data":"1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c"} Nov 25 12:33:47 crc kubenswrapper[4688]: I1125 12:33:47.854677 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:33:47 crc kubenswrapper[4688]: I1125 12:33:47.854758 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:33:48 crc kubenswrapper[4688]: E1125 12:33:48.796545 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2052cd2_0348_4c12_ac00_b221c4fa8dcc.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:33:49 crc kubenswrapper[4688]: I1125 12:33:49.725268 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 12:33:49 crc kubenswrapper[4688]: I1125 12:33:49.725664 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 12:33:49 crc kubenswrapper[4688]: I1125 12:33:49.765338 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 12:33:49 crc kubenswrapper[4688]: I1125 12:33:49.773304 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 12:33:50 crc kubenswrapper[4688]: I1125 12:33:50.512268 4688 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 12:33:50 crc kubenswrapper[4688]: I1125 12:33:50.512443 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 12:33:50 crc kubenswrapper[4688]: I1125 12:33:50.904015 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:50 crc kubenswrapper[4688]: I1125 12:33:50.904061 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:50 crc kubenswrapper[4688]: I1125 12:33:50.955175 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:50 crc kubenswrapper[4688]: I1125 12:33:50.955237 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:50 crc kubenswrapper[4688]: E1125 12:33:50.985377 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 12:33:50 crc kubenswrapper[4688]: E1125 12:33:50.986776 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 12:33:50 crc kubenswrapper[4688]: E1125 12:33:50.988032 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 12:33:50 crc kubenswrapper[4688]: E1125 12:33:50.988066 4688 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6885f88968-shb6s" podUID="de20cc3c-3de9-4e8a-97ba-203205cbb278" containerName="heat-engine" Nov 25 12:33:51 crc kubenswrapper[4688]: I1125 12:33:51.518753 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:51 crc kubenswrapper[4688]: I1125 12:33:51.519128 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:52 crc kubenswrapper[4688]: I1125 12:33:52.527121 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:33:52 crc kubenswrapper[4688]: I1125 12:33:52.528099 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:33:53 crc kubenswrapper[4688]: I1125 12:33:53.095934 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 12:33:53 crc kubenswrapper[4688]: I1125 12:33:53.134861 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Nov 25 12:33:53 crc kubenswrapper[4688]: I1125 12:33:53.884373 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:53 crc kubenswrapper[4688]: I1125 12:33:53.884782 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 12:33:53 crc kubenswrapper[4688]: I1125 12:33:53.886875 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 12:33:54 crc kubenswrapper[4688]: I1125 12:33:54.552490 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" event={"ID":"42ae21eb-1699-4d0b-ba93-54b5d07e24ae","Type":"ContainerStarted","Data":"2494fb7005a764a03ef899080074d3a893f8baf4c55b965f3c396b3846d691c3"} Nov 25 12:33:54 crc kubenswrapper[4688]: I1125 12:33:54.574554 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" podStartSLOduration=2.69776905 podStartE2EDuration="10.574503106s" podCreationTimestamp="2025-11-25 12:33:44 +0000 UTC" firstStartedPulling="2025-11-25 12:33:46.072610336 +0000 UTC m=+1176.182239204" lastFinishedPulling="2025-11-25 12:33:53.949344392 +0000 UTC m=+1184.058973260" observedRunningTime="2025-11-25 12:33:54.566246695 +0000 UTC m=+1184.675875583" watchObservedRunningTime="2025-11-25 12:33:54.574503106 +0000 UTC m=+1184.684131994" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.405656 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.552603 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl4dv\" (UniqueName: \"kubernetes.io/projected/de20cc3c-3de9-4e8a-97ba-203205cbb278-kube-api-access-hl4dv\") pod \"de20cc3c-3de9-4e8a-97ba-203205cbb278\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.552712 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-combined-ca-bundle\") pod \"de20cc3c-3de9-4e8a-97ba-203205cbb278\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.552796 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data-custom\") pod \"de20cc3c-3de9-4e8a-97ba-203205cbb278\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.553022 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data\") pod \"de20cc3c-3de9-4e8a-97ba-203205cbb278\" (UID: \"de20cc3c-3de9-4e8a-97ba-203205cbb278\") " Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.559539 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de20cc3c-3de9-4e8a-97ba-203205cbb278-kube-api-access-hl4dv" (OuterVolumeSpecName: "kube-api-access-hl4dv") pod "de20cc3c-3de9-4e8a-97ba-203205cbb278" (UID: "de20cc3c-3de9-4e8a-97ba-203205cbb278"). InnerVolumeSpecName "kube-api-access-hl4dv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.560093 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de20cc3c-3de9-4e8a-97ba-203205cbb278" (UID: "de20cc3c-3de9-4e8a-97ba-203205cbb278"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.594796 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de20cc3c-3de9-4e8a-97ba-203205cbb278" (UID: "de20cc3c-3de9-4e8a-97ba-203205cbb278"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.601857 4688 generic.go:334] "Generic (PLEG): container finished" podID="de20cc3c-3de9-4e8a-97ba-203205cbb278" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" exitCode=0 Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.601908 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6885f88968-shb6s" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.601906 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6885f88968-shb6s" event={"ID":"de20cc3c-3de9-4e8a-97ba-203205cbb278","Type":"ContainerDied","Data":"7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6"} Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.602084 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6885f88968-shb6s" event={"ID":"de20cc3c-3de9-4e8a-97ba-203205cbb278","Type":"ContainerDied","Data":"6f2fba3cfdb2428c1a2c85b33a80013fdf6664742d0258eb89fab1490a667509"} Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.602111 4688 scope.go:117] "RemoveContainer" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.623556 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data" (OuterVolumeSpecName: "config-data") pod "de20cc3c-3de9-4e8a-97ba-203205cbb278" (UID: "de20cc3c-3de9-4e8a-97ba-203205cbb278"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.655324 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.655366 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.655382 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl4dv\" (UniqueName: \"kubernetes.io/projected/de20cc3c-3de9-4e8a-97ba-203205cbb278-kube-api-access-hl4dv\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.655395 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de20cc3c-3de9-4e8a-97ba-203205cbb278-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.683174 4688 scope.go:117] "RemoveContainer" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" Nov 25 12:33:56 crc kubenswrapper[4688]: E1125 12:33:56.684455 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6\": container with ID starting with 7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6 not found: ID does not exist" containerID="7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.684504 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6"} err="failed to get container status \"7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6\": rpc error: code = NotFound desc = could not find container \"7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6\": container with ID starting with 7be1837ef6c9b6ba24346737930202b8efc9a1b0e97573471fb264b9d1685fc6 not found: ID does not exist" Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.926858 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6885f88968-shb6s"] Nov 25 12:33:56 crc kubenswrapper[4688]: I1125 12:33:56.936584 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6885f88968-shb6s"] Nov 25 12:33:58 crc kubenswrapper[4688]: I1125 12:33:58.761449 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de20cc3c-3de9-4e8a-97ba-203205cbb278" path="/var/lib/kubelet/pods/de20cc3c-3de9-4e8a-97ba-203205cbb278/volumes" Nov 25 12:33:59 crc kubenswrapper[4688]: E1125 12:33:59.055346 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2052cd2_0348_4c12_ac00_b221c4fa8dcc.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:34:02 crc kubenswrapper[4688]: I1125 12:34:02.235488 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="proxy-httpd" probeResult="failure" output="HTTP 
probe failed with statuscode: 503" Nov 25 12:34:05 crc kubenswrapper[4688]: I1125 12:34:05.683787 4688 generic.go:334] "Generic (PLEG): container finished" podID="42ae21eb-1699-4d0b-ba93-54b5d07e24ae" containerID="2494fb7005a764a03ef899080074d3a893f8baf4c55b965f3c396b3846d691c3" exitCode=0 Nov 25 12:34:05 crc kubenswrapper[4688]: I1125 12:34:05.683848 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" event={"ID":"42ae21eb-1699-4d0b-ba93-54b5d07e24ae","Type":"ContainerDied","Data":"2494fb7005a764a03ef899080074d3a893f8baf4c55b965f3c396b3846d691c3"} Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.061390 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.243865 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-config-data\") pod \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.244088 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-scripts\") pod \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.244116 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqhz7\" (UniqueName: \"kubernetes.io/projected/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-kube-api-access-dqhz7\") pod \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.244169 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-combined-ca-bundle\") pod \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\" (UID: \"42ae21eb-1699-4d0b-ba93-54b5d07e24ae\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.252300 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-kube-api-access-dqhz7" (OuterVolumeSpecName: "kube-api-access-dqhz7") pod "42ae21eb-1699-4d0b-ba93-54b5d07e24ae" (UID: "42ae21eb-1699-4d0b-ba93-54b5d07e24ae"). InnerVolumeSpecName "kube-api-access-dqhz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.260975 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-scripts" (OuterVolumeSpecName: "scripts") pod "42ae21eb-1699-4d0b-ba93-54b5d07e24ae" (UID: "42ae21eb-1699-4d0b-ba93-54b5d07e24ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.288996 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42ae21eb-1699-4d0b-ba93-54b5d07e24ae" (UID: "42ae21eb-1699-4d0b-ba93-54b5d07e24ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.290590 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-config-data" (OuterVolumeSpecName: "config-data") pod "42ae21eb-1699-4d0b-ba93-54b5d07e24ae" (UID: "42ae21eb-1699-4d0b-ba93-54b5d07e24ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.346851 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.346896 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.346908 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.346920 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqhz7\" (UniqueName: \"kubernetes.io/projected/42ae21eb-1699-4d0b-ba93-54b5d07e24ae-kube-api-access-dqhz7\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.403107 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.571949 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-scripts\") pod \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.572002 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-combined-ca-bundle\") pod \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.572072 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-sg-core-conf-yaml\") pod \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.572091 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-config-data\") pod \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.572155 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxdjz\" (UniqueName: \"kubernetes.io/projected/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-kube-api-access-vxdjz\") pod \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.572183 4688 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-run-httpd\") pod \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.572986 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-log-httpd\") pod \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\" (UID: \"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3\") " Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.573020 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" (UID: "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.573348 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" (UID: "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.573540 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.573563 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.576154 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-kube-api-access-vxdjz" (OuterVolumeSpecName: "kube-api-access-vxdjz") pod "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" (UID: "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3"). InnerVolumeSpecName "kube-api-access-vxdjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.576408 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-scripts" (OuterVolumeSpecName: "scripts") pod "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" (UID: "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.604374 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" (UID: "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.644958 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" (UID: "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.675656 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.675686 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.675698 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.675707 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxdjz\" (UniqueName: \"kubernetes.io/projected/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-kube-api-access-vxdjz\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.676470 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-config-data" (OuterVolumeSpecName: "config-data") pod "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" (UID: "f2ee67ea-08bb-451c-81ad-dd8c579dc5b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.710164 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" event={"ID":"42ae21eb-1699-4d0b-ba93-54b5d07e24ae","Type":"ContainerDied","Data":"e6eaa0d40720273990335120015d145aa3069e51f2511d7098d91e9f1434baee"} Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.710205 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6eaa0d40720273990335120015d145aa3069e51f2511d7098d91e9f1434baee" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.710268 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wdzdj" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.719008 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerID="5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f" exitCode=137 Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.719055 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerDied","Data":"5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f"} Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.719090 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2ee67ea-08bb-451c-81ad-dd8c579dc5b3","Type":"ContainerDied","Data":"57bf83ba0b37a8a27b2b9dce3462ee369e663d44766ecbaaeeb6e453d1311bbc"} Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.719112 4688 scope.go:117] "RemoveContainer" containerID="5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.719257 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.750329 4688 scope.go:117] "RemoveContainer" containerID="dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.777439 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.778351 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.789203 4688 scope.go:117] "RemoveContainer" containerID="8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.792828 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.802959 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.803463 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="proxy-httpd" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803486 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="proxy-httpd" Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.803501 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="ceilometer-notification-agent" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803509 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="ceilometer-notification-agent" Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.803544 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de20cc3c-3de9-4e8a-97ba-203205cbb278" containerName="heat-engine" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803553 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="de20cc3c-3de9-4e8a-97ba-203205cbb278" containerName="heat-engine" Nov 25 
12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.803577 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="ceilometer-central-agent" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803587 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="ceilometer-central-agent" Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.803606 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="sg-core" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803613 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="sg-core" Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.803627 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ae21eb-1699-4d0b-ba93-54b5d07e24ae" containerName="nova-cell0-conductor-db-sync" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803634 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ae21eb-1699-4d0b-ba93-54b5d07e24ae" containerName="nova-cell0-conductor-db-sync" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803854 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="ceilometer-notification-agent" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803869 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="sg-core" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803883 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="de20cc3c-3de9-4e8a-97ba-203205cbb278" containerName="heat-engine" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803892 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="proxy-httpd" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803913 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" containerName="ceilometer-central-agent" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.803930 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ae21eb-1699-4d0b-ba93-54b5d07e24ae" containerName="nova-cell0-conductor-db-sync" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.818537 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.822739 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.823151 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.849903 4688 scope.go:117] "RemoveContainer" containerID="1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.850383 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.875953 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.879712 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.882720 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.882805 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-scripts\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.883248 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-run-httpd\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.883537 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.883932 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jlmrt" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.884919 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.884980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-config-data\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.885001 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.885024 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/135ed56b-7e5c-41e7-a254-ab35c678cb20-kube-api-access-fjqm2\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.885058 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wznf\" (UniqueName: \"kubernetes.io/projected/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-kube-api-access-4wznf\") pod \"nova-cell0-conductor-0\" (UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.885145 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-log-httpd\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.885226 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.888761 4688 scope.go:117] "RemoveContainer" containerID="5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.888893 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.893809 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f\": container with ID starting with 5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f not found: ID does not exist" containerID="5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.893858 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f"} err="failed to get container status \"5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f\": rpc error: code = NotFound desc = could not find container \"5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f\": container with ID starting with 5164582162a449928a30188dfbd129d34d0385d6c329a006da3ec4507f76312f not found: ID does not exist" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.893888 4688 scope.go:117] "RemoveContainer" containerID="dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44" Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.894906 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44\": container with ID starting with dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44 not found: ID does not exist" containerID="dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.894976 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44"} err="failed to get container status \"dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44\": rpc error: code = NotFound desc = could not find container \"dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44\": container with ID starting with dabde17edfeb843918ac10e0f8c115acdded10c0d9e45cfb1a8a8906bef19f44 not found: ID does not exist" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.895018 4688 scope.go:117] "RemoveContainer" containerID="8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233" Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.895471 4688 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233\": container with ID starting with 8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233 not found: ID does not exist" containerID="8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.895516 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233"} err="failed to get container status \"8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233\": rpc error: code = NotFound desc = could not find container \"8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233\": container with ID starting with 8517f99260597a9aec164afabb96a93e6d5066b274387d14711b0690ebb8e233 not found: ID does not exist" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.895570 4688 scope.go:117] "RemoveContainer" containerID="1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c" Nov 25 12:34:07 crc kubenswrapper[4688]: E1125 12:34:07.895997 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c\": container with ID starting with 1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c not found: ID does not exist" containerID="1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.896036 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c"} err="failed to get container status \"1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c\": rpc error: code = NotFound desc = could not find container \"1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c\": container with ID starting with 1af6c963dbbb7e637e6c95ba01678efd788cbf57c757583f82694f1e7dcda66c not found: ID does not exist" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988063 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988148 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-scripts\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988167 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-run-httpd\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988209 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988238 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-config-data\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988258 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988277 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/135ed56b-7e5c-41e7-a254-ab35c678cb20-kube-api-access-fjqm2\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988310 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wznf\" (UniqueName: \"kubernetes.io/projected/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-kube-api-access-4wznf\") pod \"nova-cell0-conductor-0\" (UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988374 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-log-httpd\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.988429 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.989545 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-log-httpd\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.989896 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-run-httpd\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.994115 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-config-data\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.994427 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:07 crc kubenswrapper[4688]: I1125 12:34:07.998459 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.000144 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.005432 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.006781 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-scripts\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.010393 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wznf\" (UniqueName: \"kubernetes.io/projected/7dd18c71-cfd7-4552-ab55-c0f00f1a5c46-kube-api-access-4wznf\") pod \"nova-cell0-conductor-0\" (UID: \"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46\") " pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.012348 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/135ed56b-7e5c-41e7-a254-ab35c678cb20-kube-api-access-fjqm2\") pod \"ceilometer-0\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " pod="openstack/ceilometer-0" Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.151493 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.199952 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.690603 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:08 crc kubenswrapper[4688]: W1125 12:34:08.702653 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod135ed56b_7e5c_41e7_a254_ab35c678cb20.slice/crio-7cfad5f4f1c321598b57f82b45b0dab6adeb408d4be7ac21eab0a0e8125c1eb7 WatchSource:0}: Error finding container 7cfad5f4f1c321598b57f82b45b0dab6adeb408d4be7ac21eab0a0e8125c1eb7: Status 404 returned error can't find the container with id 7cfad5f4f1c321598b57f82b45b0dab6adeb408d4be7ac21eab0a0e8125c1eb7 Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.733010 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerStarted","Data":"7cfad5f4f1c321598b57f82b45b0dab6adeb408d4be7ac21eab0a0e8125c1eb7"} Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.754851 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2ee67ea-08bb-451c-81ad-dd8c579dc5b3" path="/var/lib/kubelet/pods/f2ee67ea-08bb-451c-81ad-dd8c579dc5b3/volumes" Nov 25 12:34:08 crc kubenswrapper[4688]: W1125 12:34:08.759723 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd18c71_cfd7_4552_ab55_c0f00f1a5c46.slice/crio-02350ac53ceeec974cbd9bfd9c4a5d0c452e118a9be088e3f5cb5cf6291d6da7 WatchSource:0}: Error finding container 02350ac53ceeec974cbd9bfd9c4a5d0c452e118a9be088e3f5cb5cf6291d6da7: Status 404 returned error can't find the container with id 02350ac53ceeec974cbd9bfd9c4a5d0c452e118a9be088e3f5cb5cf6291d6da7 Nov 25 12:34:08 crc kubenswrapper[4688]: I1125 12:34:08.759833 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 12:34:09 crc kubenswrapper[4688]: E1125 12:34:09.284030 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2052cd2_0348_4c12_ac00_b221c4fa8dcc.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:34:09 crc kubenswrapper[4688]: I1125 12:34:09.752394 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46","Type":"ContainerStarted","Data":"a3f7543aecfb6dac254d6e2b75975586774d8248597f872be450eaf33742c9e7"} Nov 25 12:34:09 crc kubenswrapper[4688]: I1125 12:34:09.752785 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7dd18c71-cfd7-4552-ab55-c0f00f1a5c46","Type":"ContainerStarted","Data":"02350ac53ceeec974cbd9bfd9c4a5d0c452e118a9be088e3f5cb5cf6291d6da7"} Nov 25 12:34:09 crc kubenswrapper[4688]: I1125 12:34:09.754275 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:09 crc kubenswrapper[4688]: I1125 12:34:09.771949 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.771928582 podStartE2EDuration="2.771928582s" podCreationTimestamp="2025-11-25 12:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 12:34:09.768308365 +0000 UTC m=+1199.877937233" watchObservedRunningTime="2025-11-25 12:34:09.771928582 +0000 UTC m=+1199.881557440" Nov 25 12:34:10 crc kubenswrapper[4688]: I1125 12:34:10.768210 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerStarted","Data":"94570ff0dabc8a83fc39e32624a577ae73249c58ab9fe85d2e71b87c06a0cc41"} Nov 25 12:34:10 crc kubenswrapper[4688]: I1125 12:34:10.768768 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerStarted","Data":"02938b14718d0dfb16c296cdcc7ee4bc30389cd98b85174bad137942495a5a0d"} Nov 25 12:34:11 crc kubenswrapper[4688]: I1125 12:34:11.779650 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerStarted","Data":"b11bd8d39f2bb9bcc16a1abe8f36c68bdf52abc29c128751a7fab19498c8e7b9"} Nov 25 12:34:12 crc kubenswrapper[4688]: I1125 12:34:12.795850 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerStarted","Data":"013ffdba8f62b5db73f21320b636748281a50ac1cd90d3983788b481787a4a7c"} Nov 25 12:34:12 crc kubenswrapper[4688]: I1125 12:34:12.796499 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 12:34:12 crc kubenswrapper[4688]: I1125 12:34:12.819358 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.109755871 podStartE2EDuration="5.819334975s" podCreationTimestamp="2025-11-25 12:34:07 +0000 UTC" firstStartedPulling="2025-11-25 12:34:08.706107112 +0000 UTC m=+1198.815735970" lastFinishedPulling="2025-11-25 12:34:12.415686206 +0000 UTC m=+1202.525315074" observedRunningTime="2025-11-25 12:34:12.816610082 +0000 UTC m=+1202.926238960" watchObservedRunningTime="2025-11-25 12:34:12.819334975 +0000 UTC m=+1202.928963843" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.229676 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.666392 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vjwr8"] Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.667479 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.700773 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.701086 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.736607 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vjwr8"] Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.744295 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.744362 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-scripts\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.744431 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zll7p\" (UniqueName: \"kubernetes.io/projected/e137d392-3d34-468e-8a68-ed64665b2200-kube-api-access-zll7p\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.744475 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-config-data\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.857959 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.858037 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-scripts\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.858108 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zll7p\" (UniqueName: \"kubernetes.io/projected/e137d392-3d34-468e-8a68-ed64665b2200-kube-api-access-zll7p\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.858151 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-config-data\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.866261 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-config-data\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.872895 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-scripts\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.877223 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.900161 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zll7p\" (UniqueName: \"kubernetes.io/projected/e137d392-3d34-468e-8a68-ed64665b2200-kube-api-access-zll7p\") pod \"nova-cell0-cell-mapping-vjwr8\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:13 crc kubenswrapper[4688]: I1125 12:34:13.993954 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.029589 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.031023 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.040932 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.080416 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.131648 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.132910 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.146246 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.171595 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.180371 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs75f\" (UniqueName: \"kubernetes.io/projected/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-kube-api-access-zs75f\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.180480 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.180562 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-config-data\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.232101 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.234206 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.240201 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.254029 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.282305 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m6xh\" (UniqueName: \"kubernetes.io/projected/c27defb6-83ca-455c-a198-6fc77fe0f901-kube-api-access-4m6xh\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.282357 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.282391 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs75f\" (UniqueName: \"kubernetes.io/projected/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-kube-api-access-zs75f\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.282445 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.282477 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.282503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-config-data\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.288121 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-config-data\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.288248 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.295180 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc 
kubenswrapper[4688]: I1125 12:34:14.297103 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.324424 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.330362 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs75f\" (UniqueName: \"kubernetes.io/projected/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-kube-api-access-zs75f\") pod \"nova-scheduler-0\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.330746 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.374584 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xthf7"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.376125 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.387771 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m6xh\" (UniqueName: \"kubernetes.io/projected/c27defb6-83ca-455c-a198-6fc77fe0f901-kube-api-access-4m6xh\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.387836 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.387901 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d9300e0-f4d8-43cf-9595-ede2e45afede-logs\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.387948 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsj7d\" (UniqueName: \"kubernetes.io/projected/3d9300e0-f4d8-43cf-9595-ede2e45afede-kube-api-access-bsj7d\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.387994 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.388024 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-config-data\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.388063 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.394588 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xthf7"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.408412 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.411851 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.424308 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m6xh\" (UniqueName: \"kubernetes.io/projected/c27defb6-83ca-455c-a198-6fc77fe0f901-kube-api-access-4m6xh\") pod \"nova-cell1-novncproxy-0\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.478160 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.489648 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-config\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.489725 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-config-data\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.489759 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pnq2\" (UniqueName: \"kubernetes.io/projected/a18b233c-925e-49dc-ad9b-71829a24c0f5-kube-api-access-7pnq2\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.489816 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.489862 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.489899 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a18b233c-925e-49dc-ad9b-71829a24c0f5-logs\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.489934 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glwb\" (UniqueName: \"kubernetes.io/projected/f448e93d-acc9-49aa-9404-bd886cd6a4f1-kube-api-access-2glwb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.489968 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d9300e0-f4d8-43cf-9595-ede2e45afede-logs\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.490009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.490044 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsj7d\" (UniqueName: \"kubernetes.io/projected/3d9300e0-f4d8-43cf-9595-ede2e45afede-kube-api-access-bsj7d\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.490081 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.490122 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-config-data\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.490158 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-svc\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.490185 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " 
pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.495652 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d9300e0-f4d8-43cf-9595-ede2e45afede-logs\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.499711 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-config-data\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.499788 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.500020 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.509700 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsj7d\" (UniqueName: \"kubernetes.io/projected/3d9300e0-f4d8-43cf-9595-ede2e45afede-kube-api-access-bsj7d\") pod \"nova-api-0\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.594908 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2glwb\" (UniqueName: \"kubernetes.io/projected/f448e93d-acc9-49aa-9404-bd886cd6a4f1-kube-api-access-2glwb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.595266 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.595323 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.595404 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-svc\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.595439 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-config\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.595505 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-config-data\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.595587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pnq2\" (UniqueName: \"kubernetes.io/projected/a18b233c-925e-49dc-ad9b-71829a24c0f5-kube-api-access-7pnq2\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.595611 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.595989 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.596036 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a18b233c-925e-49dc-ad9b-71829a24c0f5-logs\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.596796 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.596869 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-svc\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.597012 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-config\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.597237 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a18b233c-925e-49dc-ad9b-71829a24c0f5-logs\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.597442 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: 
\"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.597784 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.610514 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-config-data\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.613574 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.615817 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glwb\" (UniqueName: \"kubernetes.io/projected/f448e93d-acc9-49aa-9404-bd886cd6a4f1-kube-api-access-2glwb\") pod \"dnsmasq-dns-9b86998b5-xthf7\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.617082 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pnq2\" (UniqueName: \"kubernetes.io/projected/a18b233c-925e-49dc-ad9b-71829a24c0f5-kube-api-access-7pnq2\") pod \"nova-metadata-0\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.650414 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.670351 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.718675 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.732692 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vjwr8"] Nov 25 12:34:14 crc kubenswrapper[4688]: I1125 12:34:14.876967 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vjwr8" event={"ID":"e137d392-3d34-468e-8a68-ed64665b2200","Type":"ContainerStarted","Data":"228f6bae2ff7e04c9c1314c5bc76a469739cf109dbb2b662dcdc615a36ec384f"} Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.051946 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.084338 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hjtmq"] Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.085609 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.088983 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.089163 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.100081 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hjtmq"] Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.194188 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.217681 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-config-data\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.217771 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-scripts\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.217814 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.217839 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvkm\" (UniqueName: \"kubernetes.io/projected/2f89f99a-e930-449e-8508-0c15309f5b8b-kube-api-access-mfvkm\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.320032 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-scripts\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.320126 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.320155 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfvkm\" (UniqueName: \"kubernetes.io/projected/2f89f99a-e930-449e-8508-0c15309f5b8b-kube-api-access-mfvkm\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: 
\"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.320303 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-config-data\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.327671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-config-data\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.331040 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-scripts\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.344158 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfvkm\" (UniqueName: \"kubernetes.io/projected/2f89f99a-e930-449e-8508-0c15309f5b8b-kube-api-access-mfvkm\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.354370 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hjtmq\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.449163 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.472958 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.491087 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xthf7"] Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.538817 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.932593 4688 generic.go:334] "Generic (PLEG): container finished" podID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerID="7d06d41e19637524ec089b74f42de5c7a41513eb83f281873bb80b907cf98f6d" exitCode=0 Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.932878 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" event={"ID":"f448e93d-acc9-49aa-9404-bd886cd6a4f1","Type":"ContainerDied","Data":"7d06d41e19637524ec089b74f42de5c7a41513eb83f281873bb80b907cf98f6d"} Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.933237 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" event={"ID":"f448e93d-acc9-49aa-9404-bd886cd6a4f1","Type":"ContainerStarted","Data":"451194fd789b16f6dec28fd23c19bc868813dc10bbdd28782abc1f8730be9dfe"} Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.965295 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"63f35cfc-4bce-46a8-a1d3-29fa9317eae1","Type":"ContainerStarted","Data":"d2736e40cc2c163254f09109bd8c7b285ae839eab73a23eafea736356ebe8164"} Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.979386 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c27defb6-83ca-455c-a198-6fc77fe0f901","Type":"ContainerStarted","Data":"d5b4b6ae2e933b1a7ad453c431c0590902fcccdf63df22d7f2c22dd90be781a4"} Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.981387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a18b233c-925e-49dc-ad9b-71829a24c0f5","Type":"ContainerStarted","Data":"2980e0f5fbe18a2797ab4747b5af4c2b27a1ad84cab718bed9024ed87e1d0528"} Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.983022 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vjwr8" event={"ID":"e137d392-3d34-468e-8a68-ed64665b2200","Type":"ContainerStarted","Data":"4d16272065b827d7357ad6efad232e90ceff1ea628372978f864a8b0e7968a62"} Nov 25 12:34:15 crc kubenswrapper[4688]: I1125 12:34:15.985753 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d9300e0-f4d8-43cf-9595-ede2e45afede","Type":"ContainerStarted","Data":"040e90b744fe07ea005d903b6824c7e7c72cf6d1ef3500ac2728abe69f90e908"} Nov 25 12:34:16 crc kubenswrapper[4688]: I1125 12:34:16.022557 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vjwr8" podStartSLOduration=3.022510867 podStartE2EDuration="3.022510867s" podCreationTimestamp="2025-11-25 12:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:16.002186294 +0000 UTC m=+1206.111815162" watchObservedRunningTime="2025-11-25 12:34:16.022510867 +0000 UTC m=+1206.132139745" Nov 25 12:34:16 crc kubenswrapper[4688]: I1125 12:34:16.175819 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hjtmq"] Nov 25 12:34:16 crc kubenswrapper[4688]: W1125 12:34:16.234923 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f89f99a_e930_449e_8508_0c15309f5b8b.slice/crio-89d17e67e3a4a6ab93bc3b4da73a5d250ebf7fc4bea0cbba80b98858c8f79949 WatchSource:0}: 
Error finding container 89d17e67e3a4a6ab93bc3b4da73a5d250ebf7fc4bea0cbba80b98858c8f79949: Status 404 returned error can't find the container with id 89d17e67e3a4a6ab93bc3b4da73a5d250ebf7fc4bea0cbba80b98858c8f79949 Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.005933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" event={"ID":"f448e93d-acc9-49aa-9404-bd886cd6a4f1","Type":"ContainerStarted","Data":"4e8f5286297dc6037bb794e8ce27c5de5abcddb55177009389463a32438bea08"} Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.006275 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.018757 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hjtmq" event={"ID":"2f89f99a-e930-449e-8508-0c15309f5b8b","Type":"ContainerStarted","Data":"f5e3d624a85011ec7e2c59e9ba06c08c5d17be8dd9c2e29db7d8cb487699bf72"} Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.018809 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hjtmq" event={"ID":"2f89f99a-e930-449e-8508-0c15309f5b8b","Type":"ContainerStarted","Data":"89d17e67e3a4a6ab93bc3b4da73a5d250ebf7fc4bea0cbba80b98858c8f79949"} Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.043467 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" podStartSLOduration=3.043443318 podStartE2EDuration="3.043443318s" podCreationTimestamp="2025-11-25 12:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:17.027453581 +0000 UTC m=+1207.137082469" watchObservedRunningTime="2025-11-25 12:34:17.043443318 +0000 UTC m=+1207.153072186" Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.052610 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hjtmq" podStartSLOduration=2.05250561 podStartE2EDuration="2.05250561s" podCreationTimestamp="2025-11-25 12:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:17.042105142 +0000 UTC m=+1207.151734010" watchObservedRunningTime="2025-11-25 12:34:17.05250561 +0000 UTC m=+1207.162134488" Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.853602 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.853664 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:34:17 crc kubenswrapper[4688]: I1125 12:34:17.993657 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:18 crc kubenswrapper[4688]: I1125 12:34:18.000156 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.056497 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c27defb6-83ca-455c-a198-6fc77fe0f901","Type":"ContainerStarted","Data":"75cf2ccc4f8b654add849ae66a3331317f5b0939f81571bde99647f759ec8b17"}
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.056651 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c27defb6-83ca-455c-a198-6fc77fe0f901" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://75cf2ccc4f8b654add849ae66a3331317f5b0939f81571bde99647f759ec8b17" gracePeriod=30
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.060699 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a18b233c-925e-49dc-ad9b-71829a24c0f5","Type":"ContainerStarted","Data":"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c"}
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.060736 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a18b233c-925e-49dc-ad9b-71829a24c0f5","Type":"ContainerStarted","Data":"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05"}
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.060819 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerName="nova-metadata-log" containerID="cri-o://70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05" gracePeriod=30
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.060841 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerName="nova-metadata-metadata" containerID="cri-o://6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c" gracePeriod=30
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.064412 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d9300e0-f4d8-43cf-9595-ede2e45afede","Type":"ContainerStarted","Data":"e56db0d14c303dbafc2a77b44d0607ebdb3e7a4a8d16655728a87c1b94cb9cc1"}
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.064456 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d9300e0-f4d8-43cf-9595-ede2e45afede","Type":"ContainerStarted","Data":"186f76a5c036dc3717e5d35eb54fd7b632592c21aeb309c35cc11e5214b4f312"}
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.082266 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.457549713 podStartE2EDuration="7.082244071s" podCreationTimestamp="2025-11-25 12:34:13 +0000 UTC" firstStartedPulling="2025-11-25 12:34:15.092398271 +0000 UTC m=+1205.202027139" lastFinishedPulling="2025-11-25 12:34:18.717092629 +0000 UTC m=+1208.826721497" observedRunningTime="2025-11-25 12:34:20.076201799 +0000 UTC m=+1210.185830657" watchObservedRunningTime="2025-11-25 12:34:20.082244071 +0000 UTC m=+1210.191872939"
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.104957 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.845721488 podStartE2EDuration="6.104927547s" podCreationTimestamp="2025-11-25 12:34:14 +0000 UTC" firstStartedPulling="2025-11-25 12:34:15.46277855 +0000 UTC m=+1205.572407418" lastFinishedPulling="2025-11-25 12:34:18.721984609 +0000 UTC m=+1208.831613477" observedRunningTime="2025-11-25 12:34:20.094251451 +0000 UTC m=+1210.203880319" watchObservedRunningTime="2025-11-25 12:34:20.104927547 +0000 UTC m=+1210.214556405"
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.113721 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.874756065 podStartE2EDuration="6.113694621s" podCreationTimestamp="2025-11-25 12:34:14 +0000 UTC" firstStartedPulling="2025-11-25 12:34:15.481034159 +0000 UTC m=+1205.590663017" lastFinishedPulling="2025-11-25 12:34:18.719972705 +0000 UTC m=+1208.829601573" observedRunningTime="2025-11-25 12:34:20.111135662 +0000 UTC m=+1210.220764540" watchObservedRunningTime="2025-11-25 12:34:20.113694621 +0000 UTC m=+1210.223323489"
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.134592 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.6144189620000002 podStartE2EDuration="6.134573508s" podCreationTimestamp="2025-11-25 12:34:14 +0000 UTC" firstStartedPulling="2025-11-25 12:34:15.198961056 +0000 UTC m=+1205.308589924" lastFinishedPulling="2025-11-25 12:34:18.719115602 +0000 UTC m=+1208.828744470" observedRunningTime="2025-11-25 12:34:20.128797164 +0000 UTC m=+1210.238426052" watchObservedRunningTime="2025-11-25 12:34:20.134573508 +0000 UTC m=+1210.244202366"
Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.729943 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.863626 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a18b233c-925e-49dc-ad9b-71829a24c0f5-logs\") pod \"a18b233c-925e-49dc-ad9b-71829a24c0f5\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.863709 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-combined-ca-bundle\") pod \"a18b233c-925e-49dc-ad9b-71829a24c0f5\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.863836 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-config-data\") pod \"a18b233c-925e-49dc-ad9b-71829a24c0f5\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.863895 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pnq2\" (UniqueName: \"kubernetes.io/projected/a18b233c-925e-49dc-ad9b-71829a24c0f5-kube-api-access-7pnq2\") pod \"a18b233c-925e-49dc-ad9b-71829a24c0f5\" (UID: \"a18b233c-925e-49dc-ad9b-71829a24c0f5\") " Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.863985 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a18b233c-925e-49dc-ad9b-71829a24c0f5-logs" (OuterVolumeSpecName: "logs") pod "a18b233c-925e-49dc-ad9b-71829a24c0f5" (UID: "a18b233c-925e-49dc-ad9b-71829a24c0f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.864578 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a18b233c-925e-49dc-ad9b-71829a24c0f5-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.874243 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18b233c-925e-49dc-ad9b-71829a24c0f5-kube-api-access-7pnq2" (OuterVolumeSpecName: "kube-api-access-7pnq2") pod "a18b233c-925e-49dc-ad9b-71829a24c0f5" (UID: "a18b233c-925e-49dc-ad9b-71829a24c0f5"). InnerVolumeSpecName "kube-api-access-7pnq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.894769 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-config-data" (OuterVolumeSpecName: "config-data") pod "a18b233c-925e-49dc-ad9b-71829a24c0f5" (UID: "a18b233c-925e-49dc-ad9b-71829a24c0f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.903470 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a18b233c-925e-49dc-ad9b-71829a24c0f5" (UID: "a18b233c-925e-49dc-ad9b-71829a24c0f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.966270 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.966310 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pnq2\" (UniqueName: \"kubernetes.io/projected/a18b233c-925e-49dc-ad9b-71829a24c0f5-kube-api-access-7pnq2\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:20 crc kubenswrapper[4688]: I1125 12:34:20.966322 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18b233c-925e-49dc-ad9b-71829a24c0f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.073673 4688 generic.go:334] "Generic (PLEG): container finished" podID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerID="6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c" exitCode=0 Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.073702 4688 generic.go:334] "Generic (PLEG): container finished" podID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerID="70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05" exitCode=143 Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.074572 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.077899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a18b233c-925e-49dc-ad9b-71829a24c0f5","Type":"ContainerDied","Data":"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c"} Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.077940 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a18b233c-925e-49dc-ad9b-71829a24c0f5","Type":"ContainerDied","Data":"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05"} Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.077960 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a18b233c-925e-49dc-ad9b-71829a24c0f5","Type":"ContainerDied","Data":"2980e0f5fbe18a2797ab4747b5af4c2b27a1ad84cab718bed9024ed87e1d0528"} Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.077978 4688 scope.go:117] "RemoveContainer" containerID="6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.108173 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.112895 4688 scope.go:117] "RemoveContainer" containerID="70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.117480 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.137574 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:21 crc kubenswrapper[4688]: E1125 12:34:21.138149 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerName="nova-metadata-log" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.138176 4688 
Nov 25 12:34:21 crc kubenswrapper[4688]: E1125 12:34:21.138228 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerName="nova-metadata-metadata"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.138240 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerName="nova-metadata-metadata"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.138462 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerName="nova-metadata-log"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.138496 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18b233c-925e-49dc-ad9b-71829a24c0f5" containerName="nova-metadata-metadata"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.140427 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.141310 4688 scope.go:117] "RemoveContainer" containerID="6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c"
Nov 25 12:34:21 crc kubenswrapper[4688]: E1125 12:34:21.141904 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c\": container with ID starting with 6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c not found: ID does not exist" containerID="6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.141941 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c"} err="failed to get container status \"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c\": rpc error: code = NotFound desc = could not find container \"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c\": container with ID starting with 6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c not found: ID does not exist"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.141967 4688 scope.go:117] "RemoveContainer" containerID="70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05"
Nov 25 12:34:21 crc kubenswrapper[4688]: E1125 12:34:21.142429 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05\": container with ID starting with 70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05 not found: ID does not exist" containerID="70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.142487 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05"} err="failed to get container status \"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05\": rpc error: code = NotFound desc = could not find container \"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05\": container with ID starting with 70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05 not found: ID does not exist"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.142553 4688 scope.go:117] "RemoveContainer" containerID="6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.142854 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c"} err="failed to get container status \"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c\": rpc error: code = NotFound desc = could not find container \"6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c\": container with ID starting with 6f61ebba3b6da128732df357165e10117334bb9dd68411c0b60668c1c6d6858c not found: ID does not exist"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.142874 4688 scope.go:117] "RemoveContainer" containerID="70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.143128 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05"} err="failed to get container status \"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05\": rpc error: code = NotFound desc = could not find container \"70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05\": container with ID starting with 70403c673b05fd4ac0b16fe57ea11898444ce3e1272ec8f7b269518e86254c05 not found: ID does not exist"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.146352 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.146589 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.150168 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.289996 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.290076 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-config-data\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.290113 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0"
Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.290131 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81556b99-5028-4a35-b14d-337f450cb8f4-logs\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0"
\"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.290162 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9h5p\" (UniqueName: \"kubernetes.io/projected/81556b99-5028-4a35-b14d-337f450cb8f4-kube-api-access-m9h5p\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.391572 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.391636 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-config-data\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.391669 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.391687 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81556b99-5028-4a35-b14d-337f450cb8f4-logs\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.391717 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9h5p\" (UniqueName: \"kubernetes.io/projected/81556b99-5028-4a35-b14d-337f450cb8f4-kube-api-access-m9h5p\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.393785 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81556b99-5028-4a35-b14d-337f450cb8f4-logs\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.405087 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.405404 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.405796 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-config-data\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.409610 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9h5p\" (UniqueName: \"kubernetes.io/projected/81556b99-5028-4a35-b14d-337f450cb8f4-kube-api-access-m9h5p\") pod \"nova-metadata-0\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.474654 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:21 crc kubenswrapper[4688]: I1125 12:34:21.938568 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:21 crc kubenswrapper[4688]: W1125 12:34:21.939450 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81556b99_5028_4a35_b14d_337f450cb8f4.slice/crio-e36501375c4e3bd4572a1aa9367ed6ec92f547701a1463158ce8d1f36637dcfd WatchSource:0}: Error finding container e36501375c4e3bd4572a1aa9367ed6ec92f547701a1463158ce8d1f36637dcfd: Status 404 returned error can't find the container with id e36501375c4e3bd4572a1aa9367ed6ec92f547701a1463158ce8d1f36637dcfd Nov 25 12:34:22 crc kubenswrapper[4688]: I1125 12:34:22.082629 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81556b99-5028-4a35-b14d-337f450cb8f4","Type":"ContainerStarted","Data":"e36501375c4e3bd4572a1aa9367ed6ec92f547701a1463158ce8d1f36637dcfd"} Nov 25 12:34:22 crc kubenswrapper[4688]: I1125 12:34:22.750902 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a18b233c-925e-49dc-ad9b-71829a24c0f5" path="/var/lib/kubelet/pods/a18b233c-925e-49dc-ad9b-71829a24c0f5/volumes" Nov 25 12:34:23 crc kubenswrapper[4688]: I1125 12:34:23.095799 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81556b99-5028-4a35-b14d-337f450cb8f4","Type":"ContainerStarted","Data":"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b"} Nov 25 12:34:23 crc kubenswrapper[4688]: I1125 12:34:23.096120 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81556b99-5028-4a35-b14d-337f450cb8f4","Type":"ContainerStarted","Data":"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1"} Nov 25 12:34:23 crc kubenswrapper[4688]: I1125 12:34:23.121006 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.120986662 podStartE2EDuration="2.120986662s" podCreationTimestamp="2025-11-25 12:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:23.115741012 +0000 UTC m=+1213.225369880" watchObservedRunningTime="2025-11-25 12:34:23.120986662 +0000 UTC m=+1213.230615530" Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.106321 4688 generic.go:334] "Generic (PLEG): container finished" podID="2f89f99a-e930-449e-8508-0c15309f5b8b" containerID="f5e3d624a85011ec7e2c59e9ba06c08c5d17be8dd9c2e29db7d8cb487699bf72" exitCode=0 Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.106402 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hjtmq" 
event={"ID":"2f89f99a-e930-449e-8508-0c15309f5b8b","Type":"ContainerDied","Data":"f5e3d624a85011ec7e2c59e9ba06c08c5d17be8dd9c2e29db7d8cb487699bf72"} Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.108655 4688 generic.go:334] "Generic (PLEG): container finished" podID="e137d392-3d34-468e-8a68-ed64665b2200" containerID="4d16272065b827d7357ad6efad232e90ceff1ea628372978f864a8b0e7968a62" exitCode=0 Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.108724 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vjwr8" event={"ID":"e137d392-3d34-468e-8a68-ed64665b2200","Type":"ContainerDied","Data":"4d16272065b827d7357ad6efad232e90ceff1ea628372978f864a8b0e7968a62"} Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.479197 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.479243 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.501084 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.510173 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.652385 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.652443 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.720623 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.847594 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zbs4x"] Nov 25 12:34:24 crc kubenswrapper[4688]: I1125 12:34:24.848148 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" podUID="32b7e898-1175-4763-b18b-cf98c2ca0982" containerName="dnsmasq-dns" containerID="cri-o://41f775b9ff96d73f5aaaa03f8c2e09501b96fa29350d3912ff43c86136c3b654" gracePeriod=10 Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.124807 4688 generic.go:334] "Generic (PLEG): container finished" podID="32b7e898-1175-4763-b18b-cf98c2ca0982" containerID="41f775b9ff96d73f5aaaa03f8c2e09501b96fa29350d3912ff43c86136c3b654" exitCode=0 Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.125012 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" event={"ID":"32b7e898-1175-4763-b18b-cf98c2ca0982","Type":"ContainerDied","Data":"41f775b9ff96d73f5aaaa03f8c2e09501b96fa29350d3912ff43c86136c3b654"} Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.178389 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.494346 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.585135 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm796\" (UniqueName: \"kubernetes.io/projected/32b7e898-1175-4763-b18b-cf98c2ca0982-kube-api-access-sm796\") pod \"32b7e898-1175-4763-b18b-cf98c2ca0982\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.585197 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-nb\") pod \"32b7e898-1175-4763-b18b-cf98c2ca0982\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.585262 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-sb\") pod \"32b7e898-1175-4763-b18b-cf98c2ca0982\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.585311 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-swift-storage-0\") pod \"32b7e898-1175-4763-b18b-cf98c2ca0982\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.585363 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-svc\") pod \"32b7e898-1175-4763-b18b-cf98c2ca0982\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.585392 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-config\") pod \"32b7e898-1175-4763-b18b-cf98c2ca0982\" (UID: \"32b7e898-1175-4763-b18b-cf98c2ca0982\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.629324 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b7e898-1175-4763-b18b-cf98c2ca0982-kube-api-access-sm796" (OuterVolumeSpecName: "kube-api-access-sm796") pod "32b7e898-1175-4763-b18b-cf98c2ca0982" (UID: "32b7e898-1175-4763-b18b-cf98c2ca0982"). InnerVolumeSpecName "kube-api-access-sm796". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.689038 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm796\" (UniqueName: \"kubernetes.io/projected/32b7e898-1175-4763-b18b-cf98c2ca0982-kube-api-access-sm796\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.739892 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.740419 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.765002 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-config" (OuterVolumeSpecName: "config") pod "32b7e898-1175-4763-b18b-cf98c2ca0982" (UID: "32b7e898-1175-4763-b18b-cf98c2ca0982"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.765355 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32b7e898-1175-4763-b18b-cf98c2ca0982" (UID: "32b7e898-1175-4763-b18b-cf98c2ca0982"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.768498 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "32b7e898-1175-4763-b18b-cf98c2ca0982" (UID: "32b7e898-1175-4763-b18b-cf98c2ca0982"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.769012 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32b7e898-1175-4763-b18b-cf98c2ca0982" (UID: "32b7e898-1175-4763-b18b-cf98c2ca0982"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.769286 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hjtmq" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.773221 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32b7e898-1175-4763-b18b-cf98c2ca0982" (UID: "32b7e898-1175-4763-b18b-cf98c2ca0982"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.791321 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.791585 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.791705 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.791789 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.791870 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b7e898-1175-4763-b18b-cf98c2ca0982-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.795406 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vjwr8" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.893316 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-scripts\") pod \"2f89f99a-e930-449e-8508-0c15309f5b8b\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.893419 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-scripts\") pod \"e137d392-3d34-468e-8a68-ed64665b2200\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.893549 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-combined-ca-bundle\") pod \"2f89f99a-e930-449e-8508-0c15309f5b8b\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.893605 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-config-data\") pod \"e137d392-3d34-468e-8a68-ed64665b2200\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.893656 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-config-data\") pod \"2f89f99a-e930-449e-8508-0c15309f5b8b\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.893681 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-combined-ca-bundle\") 
pod \"e137d392-3d34-468e-8a68-ed64665b2200\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.893742 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zll7p\" (UniqueName: \"kubernetes.io/projected/e137d392-3d34-468e-8a68-ed64665b2200-kube-api-access-zll7p\") pod \"e137d392-3d34-468e-8a68-ed64665b2200\" (UID: \"e137d392-3d34-468e-8a68-ed64665b2200\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.893780 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfvkm\" (UniqueName: \"kubernetes.io/projected/2f89f99a-e930-449e-8508-0c15309f5b8b-kube-api-access-mfvkm\") pod \"2f89f99a-e930-449e-8508-0c15309f5b8b\" (UID: \"2f89f99a-e930-449e-8508-0c15309f5b8b\") " Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.899766 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-scripts" (OuterVolumeSpecName: "scripts") pod "2f89f99a-e930-449e-8508-0c15309f5b8b" (UID: "2f89f99a-e930-449e-8508-0c15309f5b8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.900591 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f89f99a-e930-449e-8508-0c15309f5b8b-kube-api-access-mfvkm" (OuterVolumeSpecName: "kube-api-access-mfvkm") pod "2f89f99a-e930-449e-8508-0c15309f5b8b" (UID: "2f89f99a-e930-449e-8508-0c15309f5b8b"). InnerVolumeSpecName "kube-api-access-mfvkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.900674 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e137d392-3d34-468e-8a68-ed64665b2200-kube-api-access-zll7p" (OuterVolumeSpecName: "kube-api-access-zll7p") pod "e137d392-3d34-468e-8a68-ed64665b2200" (UID: "e137d392-3d34-468e-8a68-ed64665b2200"). InnerVolumeSpecName "kube-api-access-zll7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.906848 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-scripts" (OuterVolumeSpecName: "scripts") pod "e137d392-3d34-468e-8a68-ed64665b2200" (UID: "e137d392-3d34-468e-8a68-ed64665b2200"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.925236 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f89f99a-e930-449e-8508-0c15309f5b8b" (UID: "2f89f99a-e930-449e-8508-0c15309f5b8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.932139 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-config-data" (OuterVolumeSpecName: "config-data") pod "2f89f99a-e930-449e-8508-0c15309f5b8b" (UID: "2f89f99a-e930-449e-8508-0c15309f5b8b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.941439 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-config-data" (OuterVolumeSpecName: "config-data") pod "e137d392-3d34-468e-8a68-ed64665b2200" (UID: "e137d392-3d34-468e-8a68-ed64665b2200"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.943760 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e137d392-3d34-468e-8a68-ed64665b2200" (UID: "e137d392-3d34-468e-8a68-ed64665b2200"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.996414 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.996460 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.996474 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zll7p\" (UniqueName: \"kubernetes.io/projected/e137d392-3d34-468e-8a68-ed64665b2200-kube-api-access-zll7p\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.996486 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfvkm\" (UniqueName: \"kubernetes.io/projected/2f89f99a-e930-449e-8508-0c15309f5b8b-kube-api-access-mfvkm\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.996497 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.996508 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.996535 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f89f99a-e930-449e-8508-0c15309f5b8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:25 crc kubenswrapper[4688]: I1125 12:34:25.996547 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e137d392-3d34-468e-8a68-ed64665b2200-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.134881 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zbs4x" event={"ID":"32b7e898-1175-4763-b18b-cf98c2ca0982","Type":"ContainerDied","Data":"047287faa3ba3f15103ca3816944975cd4007291ea14347f3ee0a1d65b7803aa"} Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.134928 4688 scope.go:117] "RemoveContainer" containerID="41f775b9ff96d73f5aaaa03f8c2e09501b96fa29350d3912ff43c86136c3b654" Nov 
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.137182 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hjtmq" event={"ID":"2f89f99a-e930-449e-8508-0c15309f5b8b","Type":"ContainerDied","Data":"89d17e67e3a4a6ab93bc3b4da73a5d250ebf7fc4bea0cbba80b98858c8f79949"}
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.137566 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d17e67e3a4a6ab93bc3b4da73a5d250ebf7fc4bea0cbba80b98858c8f79949"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.137659 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hjtmq"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.155495 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vjwr8"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.157922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vjwr8" event={"ID":"e137d392-3d34-468e-8a68-ed64665b2200","Type":"ContainerDied","Data":"228f6bae2ff7e04c9c1314c5bc76a469739cf109dbb2b662dcdc615a36ec384f"}
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.157965 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="228f6bae2ff7e04c9c1314c5bc76a469739cf109dbb2b662dcdc615a36ec384f"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.172328 4688 scope.go:117] "RemoveContainer" containerID="9d39dc6cb53aa456c012d57aaaa6596957e6730f1569174cc9cfe207b53423e3"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.209591 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zbs4x"]
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.269001 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zbs4x"]
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.294629 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 25 12:34:26 crc kubenswrapper[4688]: E1125 12:34:26.295147 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b7e898-1175-4763-b18b-cf98c2ca0982" containerName="dnsmasq-dns"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.295178 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b7e898-1175-4763-b18b-cf98c2ca0982" containerName="dnsmasq-dns"
Nov 25 12:34:26 crc kubenswrapper[4688]: E1125 12:34:26.295205 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f89f99a-e930-449e-8508-0c15309f5b8b" containerName="nova-cell1-conductor-db-sync"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.295216 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f89f99a-e930-449e-8508-0c15309f5b8b" containerName="nova-cell1-conductor-db-sync"
Nov 25 12:34:26 crc kubenswrapper[4688]: E1125 12:34:26.295229 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e137d392-3d34-468e-8a68-ed64665b2200" containerName="nova-manage"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.295237 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e137d392-3d34-468e-8a68-ed64665b2200" containerName="nova-manage"
Nov 25 12:34:26 crc kubenswrapper[4688]: E1125 12:34:26.295261 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b7e898-1175-4763-b18b-cf98c2ca0982" containerName="init"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.295268 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b7e898-1175-4763-b18b-cf98c2ca0982" containerName="init"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.295475 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b7e898-1175-4763-b18b-cf98c2ca0982" containerName="dnsmasq-dns"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.295500 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e137d392-3d34-468e-8a68-ed64665b2200" containerName="nova-manage"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.295538 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f89f99a-e930-449e-8508-0c15309f5b8b" containerName="nova-cell1-conductor-db-sync"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.296315 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.313294 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.334013 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.419665 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twbjz\" (UniqueName: \"kubernetes.io/projected/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-kube-api-access-twbjz\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.419792 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.419873 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0"
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.471890 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.472138 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-log" containerID="cri-o://186f76a5c036dc3717e5d35eb54fd7b632592c21aeb309c35cc11e5214b4f312" gracePeriod=30
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.472283 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-api" containerID="cri-o://e56db0d14c303dbafc2a77b44d0607ebdb3e7a4a8d16655728a87c1b94cb9cc1" gracePeriod=30
Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.478003 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.478752 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.519983 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.521869 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.522006 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.522157 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twbjz\" (UniqueName: \"kubernetes.io/projected/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-kube-api-access-twbjz\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.538351 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.542190 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.545478 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twbjz\" (UniqueName: \"kubernetes.io/projected/92f1ea60-7d39-4b4f-911e-7dfdffffe38b-kube-api-access-twbjz\") pod \"nova-cell1-conductor-0\" (UID: \"92f1ea60-7d39-4b4f-911e-7dfdffffe38b\") " pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.620175 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.636078 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:26 crc kubenswrapper[4688]: I1125 12:34:26.762108 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b7e898-1175-4763-b18b-cf98c2ca0982" path="/var/lib/kubelet/pods/32b7e898-1175-4763-b18b-cf98c2ca0982/volumes" Nov 25 12:34:27 crc kubenswrapper[4688]: I1125 12:34:27.165796 4688 generic.go:334] "Generic (PLEG): container finished" podID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerID="186f76a5c036dc3717e5d35eb54fd7b632592c21aeb309c35cc11e5214b4f312" exitCode=143 Nov 25 12:34:27 crc kubenswrapper[4688]: I1125 12:34:27.165904 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d9300e0-f4d8-43cf-9595-ede2e45afede","Type":"ContainerDied","Data":"186f76a5c036dc3717e5d35eb54fd7b632592c21aeb309c35cc11e5214b4f312"} Nov 25 12:34:27 crc kubenswrapper[4688]: I1125 12:34:27.166198 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="63f35cfc-4bce-46a8-a1d3-29fa9317eae1" containerName="nova-scheduler-scheduler" containerID="cri-o://a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f" gracePeriod=30 Nov 25 12:34:27 crc kubenswrapper[4688]: I1125 12:34:27.171076 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 12:34:27 crc kubenswrapper[4688]: W1125 12:34:27.175135 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92f1ea60_7d39_4b4f_911e_7dfdffffe38b.slice/crio-15d3cce7f28bc7d3cd52c9656cc8d99b9525fb21722a81a748fa1c314aed712d WatchSource:0}: Error finding container 15d3cce7f28bc7d3cd52c9656cc8d99b9525fb21722a81a748fa1c314aed712d: Status 404 returned error can't find the container with id 15d3cce7f28bc7d3cd52c9656cc8d99b9525fb21722a81a748fa1c314aed712d Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.177432 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"92f1ea60-7d39-4b4f-911e-7dfdffffe38b","Type":"ContainerStarted","Data":"4c86a29665271aa9b3c8e174468f962aafb96dc9f083515cffc45b47c9e9f05b"} Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.177767 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"92f1ea60-7d39-4b4f-911e-7dfdffffe38b","Type":"ContainerStarted","Data":"15d3cce7f28bc7d3cd52c9656cc8d99b9525fb21722a81a748fa1c314aed712d"} Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.177715 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" containerName="nova-metadata-log" containerID="cri-o://58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1" gracePeriod=30 Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.177684 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" containerName="nova-metadata-metadata" containerID="cri-o://64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b" gracePeriod=30 Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.197274 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.197257251 podStartE2EDuration="2.197257251s" podCreationTimestamp="2025-11-25 12:34:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:28.194107266 +0000 UTC m=+1218.303736144" watchObservedRunningTime="2025-11-25 12:34:28.197257251 +0000 UTC m=+1218.306886119" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.764539 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.866345 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-combined-ca-bundle\") pod \"81556b99-5028-4a35-b14d-337f450cb8f4\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.866458 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-config-data\") pod \"81556b99-5028-4a35-b14d-337f450cb8f4\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.866497 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81556b99-5028-4a35-b14d-337f450cb8f4-logs\") pod \"81556b99-5028-4a35-b14d-337f450cb8f4\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.866575 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9h5p\" (UniqueName: \"kubernetes.io/projected/81556b99-5028-4a35-b14d-337f450cb8f4-kube-api-access-m9h5p\") pod \"81556b99-5028-4a35-b14d-337f450cb8f4\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.866617 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-nova-metadata-tls-certs\") pod \"81556b99-5028-4a35-b14d-337f450cb8f4\" (UID: \"81556b99-5028-4a35-b14d-337f450cb8f4\") " Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.867049 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81556b99-5028-4a35-b14d-337f450cb8f4-logs" (OuterVolumeSpecName: "logs") pod "81556b99-5028-4a35-b14d-337f450cb8f4" (UID: "81556b99-5028-4a35-b14d-337f450cb8f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.867615 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81556b99-5028-4a35-b14d-337f450cb8f4-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.884884 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81556b99-5028-4a35-b14d-337f450cb8f4-kube-api-access-m9h5p" (OuterVolumeSpecName: "kube-api-access-m9h5p") pod "81556b99-5028-4a35-b14d-337f450cb8f4" (UID: "81556b99-5028-4a35-b14d-337f450cb8f4"). InnerVolumeSpecName "kube-api-access-m9h5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.895571 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81556b99-5028-4a35-b14d-337f450cb8f4" (UID: "81556b99-5028-4a35-b14d-337f450cb8f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.902833 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-config-data" (OuterVolumeSpecName: "config-data") pod "81556b99-5028-4a35-b14d-337f450cb8f4" (UID: "81556b99-5028-4a35-b14d-337f450cb8f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.930305 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "81556b99-5028-4a35-b14d-337f450cb8f4" (UID: "81556b99-5028-4a35-b14d-337f450cb8f4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.969404 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.969464 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.969474 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9h5p\" (UniqueName: \"kubernetes.io/projected/81556b99-5028-4a35-b14d-337f450cb8f4-kube-api-access-m9h5p\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:28 crc kubenswrapper[4688]: I1125 12:34:28.969487 4688 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81556b99-5028-4a35-b14d-337f450cb8f4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.187238 4688 generic.go:334] "Generic (PLEG): container finished" podID="81556b99-5028-4a35-b14d-337f450cb8f4" containerID="64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b" exitCode=0 Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.187281 4688 generic.go:334] "Generic (PLEG): container finished" podID="81556b99-5028-4a35-b14d-337f450cb8f4" containerID="58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1" exitCode=143 Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.187286 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.187322 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81556b99-5028-4a35-b14d-337f450cb8f4","Type":"ContainerDied","Data":"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b"} Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.187365 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81556b99-5028-4a35-b14d-337f450cb8f4","Type":"ContainerDied","Data":"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1"} Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.187378 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81556b99-5028-4a35-b14d-337f450cb8f4","Type":"ContainerDied","Data":"e36501375c4e3bd4572a1aa9367ed6ec92f547701a1463158ce8d1f36637dcfd"} Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.187396 4688 scope.go:117] "RemoveContainer" containerID="64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.188179 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.210303 4688 scope.go:117] "RemoveContainer" containerID="58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.232642 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.249111 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.253231 4688 scope.go:117] "RemoveContainer" containerID="64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b" Nov 25 12:34:29 crc kubenswrapper[4688]: E1125 12:34:29.260843 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b\": container with ID starting with 64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b not found: ID does not exist" containerID="64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.260922 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b"} err="failed to get container status \"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b\": rpc error: code = NotFound desc = could not find container \"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b\": container with ID starting with 64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b not found: ID does not exist" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.260968 4688 scope.go:117] "RemoveContainer" containerID="58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.261122 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:29 crc kubenswrapper[4688]: E1125 12:34:29.261890 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" 
containerName="nova-metadata-metadata" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.261922 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" containerName="nova-metadata-metadata" Nov 25 12:34:29 crc kubenswrapper[4688]: E1125 12:34:29.261982 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" containerName="nova-metadata-log" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.261993 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" containerName="nova-metadata-log" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.262266 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" containerName="nova-metadata-log" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.262301 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" containerName="nova-metadata-metadata" Nov 25 12:34:29 crc kubenswrapper[4688]: E1125 12:34:29.262815 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1\": container with ID starting with 58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1 not found: ID does not exist" containerID="58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.262874 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1"} err="failed to get container status \"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1\": rpc error: code = NotFound desc = could not find container \"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1\": container with ID starting with 58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1 not found: ID does not exist" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.262911 4688 scope.go:117] "RemoveContainer" containerID="64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.263227 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b"} err="failed to get container status \"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b\": rpc error: code = NotFound desc = could not find container \"64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b\": container with ID starting with 64d39d34882ec469cf2b7a412b801cfacb3bdd3e8d73bacc71bc46368d82f13b not found: ID does not exist" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.263252 4688 scope.go:117] "RemoveContainer" containerID="58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.264174 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1"} err="failed to get container status \"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1\": rpc error: code = NotFound desc = could not find container \"58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1\": container with ID 
starting with 58ec72f4ddb867537328cee21e8df3a23c08e900ef8b4c5f3b5affe265ead9c1 not found: ID does not exist" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.264195 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.268372 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.269777 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.289062 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.379070 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-config-data\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.379142 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.379378 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t6tn\" (UniqueName: \"kubernetes.io/projected/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-kube-api-access-7t6tn\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.379559 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-logs\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.379686 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: E1125 12:34:29.481114 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.481602 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t6tn\" (UniqueName: \"kubernetes.io/projected/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-kube-api-access-7t6tn\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.481777 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-logs\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.481906 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.482011 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-config-data\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.482068 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.482392 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-logs\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: E1125 12:34:29.483000 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 12:34:29 crc kubenswrapper[4688]: E1125 12:34:29.484407 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 12:34:29 crc kubenswrapper[4688]: E1125 12:34:29.484558 4688 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="63f35cfc-4bce-46a8-a1d3-29fa9317eae1" containerName="nova-scheduler-scheduler" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.486193 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.487008 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " 
pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.498954 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-config-data\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.501573 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t6tn\" (UniqueName: \"kubernetes.io/projected/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-kube-api-access-7t6tn\") pod \"nova-metadata-0\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " pod="openstack/nova-metadata-0" Nov 25 12:34:29 crc kubenswrapper[4688]: I1125 12:34:29.639695 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:34:30 crc kubenswrapper[4688]: I1125 12:34:30.139170 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:34:30 crc kubenswrapper[4688]: W1125 12:34:30.146298 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0d796e9_1552_4c93_94d0_cd0ac2cf8aca.slice/crio-32cd859b1ca07202e4abd33baba1c0fcb9a15460745d2c0b92a0600738d71219 WatchSource:0}: Error finding container 32cd859b1ca07202e4abd33baba1c0fcb9a15460745d2c0b92a0600738d71219: Status 404 returned error can't find the container with id 32cd859b1ca07202e4abd33baba1c0fcb9a15460745d2c0b92a0600738d71219 Nov 25 12:34:30 crc kubenswrapper[4688]: I1125 12:34:30.208617 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca","Type":"ContainerStarted","Data":"32cd859b1ca07202e4abd33baba1c0fcb9a15460745d2c0b92a0600738d71219"} Nov 25 12:34:30 crc kubenswrapper[4688]: I1125 12:34:30.749592 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81556b99-5028-4a35-b14d-337f450cb8f4" path="/var/lib/kubelet/pods/81556b99-5028-4a35-b14d-337f450cb8f4/volumes" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.194886 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.219110 4688 generic.go:334] "Generic (PLEG): container finished" podID="63f35cfc-4bce-46a8-a1d3-29fa9317eae1" containerID="a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f" exitCode=0 Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.219157 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"63f35cfc-4bce-46a8-a1d3-29fa9317eae1","Type":"ContainerDied","Data":"a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f"} Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.219211 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"63f35cfc-4bce-46a8-a1d3-29fa9317eae1","Type":"ContainerDied","Data":"d2736e40cc2c163254f09109bd8c7b285ae839eab73a23eafea736356ebe8164"} Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.219235 4688 scope.go:117] "RemoveContainer" containerID="a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.219173 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.225140 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca","Type":"ContainerStarted","Data":"8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd"} Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.225196 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca","Type":"ContainerStarted","Data":"d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57"} Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.243873 4688 scope.go:117] "RemoveContainer" containerID="a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f" Nov 25 12:34:31 crc kubenswrapper[4688]: E1125 12:34:31.245210 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f\": container with ID starting with a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f not found: ID does not exist" containerID="a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.245248 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f"} err="failed to get container status \"a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f\": rpc error: code = NotFound desc = could not find container \"a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f\": container with ID starting with a5dc7f97bb8a041970743885fce6a5b7aec2b5f3563427629b93d79702873e8f not found: ID does not exist" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.259509 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.259487209 podStartE2EDuration="2.259487209s" podCreationTimestamp="2025-11-25 12:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:31.248232769 +0000 UTC m=+1221.357861667" watchObservedRunningTime="2025-11-25 12:34:31.259487209 +0000 UTC m=+1221.369116087" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.318309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-combined-ca-bundle\") pod \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.318356 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-config-data\") pod \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.318430 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs75f\" (UniqueName: \"kubernetes.io/projected/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-kube-api-access-zs75f\") pod \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\" (UID: \"63f35cfc-4bce-46a8-a1d3-29fa9317eae1\") " Nov 25 12:34:31 crc 
kubenswrapper[4688]: I1125 12:34:31.323428 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-kube-api-access-zs75f" (OuterVolumeSpecName: "kube-api-access-zs75f") pod "63f35cfc-4bce-46a8-a1d3-29fa9317eae1" (UID: "63f35cfc-4bce-46a8-a1d3-29fa9317eae1"). InnerVolumeSpecName "kube-api-access-zs75f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.347669 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-config-data" (OuterVolumeSpecName: "config-data") pod "63f35cfc-4bce-46a8-a1d3-29fa9317eae1" (UID: "63f35cfc-4bce-46a8-a1d3-29fa9317eae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.349132 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63f35cfc-4bce-46a8-a1d3-29fa9317eae1" (UID: "63f35cfc-4bce-46a8-a1d3-29fa9317eae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.421343 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.421698 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.421712 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs75f\" (UniqueName: \"kubernetes.io/projected/63f35cfc-4bce-46a8-a1d3-29fa9317eae1-kube-api-access-zs75f\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.553586 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.567405 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.580728 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:31 crc kubenswrapper[4688]: E1125 12:34:31.581169 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f35cfc-4bce-46a8-a1d3-29fa9317eae1" containerName="nova-scheduler-scheduler" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.581190 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f35cfc-4bce-46a8-a1d3-29fa9317eae1" containerName="nova-scheduler-scheduler" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.581429 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f35cfc-4bce-46a8-a1d3-29fa9317eae1" containerName="nova-scheduler-scheduler" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.582125 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.584182 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.589909 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.726601 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.726800 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-config-data\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.726859 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlfpc\" (UniqueName: \"kubernetes.io/projected/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-kube-api-access-zlfpc\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.829770 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.831397 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-config-data\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.831952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlfpc\" (UniqueName: \"kubernetes.io/projected/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-kube-api-access-zlfpc\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.832501 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.834581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-config-data\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.848324 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlfpc\" (UniqueName: 
\"kubernetes.io/projected/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-kube-api-access-zlfpc\") pod \"nova-scheduler-0\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " pod="openstack/nova-scheduler-0" Nov 25 12:34:31 crc kubenswrapper[4688]: I1125 12:34:31.902408 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.237376 4688 generic.go:334] "Generic (PLEG): container finished" podID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerID="e56db0d14c303dbafc2a77b44d0607ebdb3e7a4a8d16655728a87c1b94cb9cc1" exitCode=0 Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.237711 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d9300e0-f4d8-43cf-9595-ede2e45afede","Type":"ContainerDied","Data":"e56db0d14c303dbafc2a77b44d0607ebdb3e7a4a8d16655728a87c1b94cb9cc1"} Nov 25 12:34:32 crc kubenswrapper[4688]: W1125 12:34:32.426487 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc53b6538_362c_4e79_ab5f_578cfe6b1ab9.slice/crio-55771b4d6cf2ebd04a186cf154cc2200451310a9ada9c5b9b10d07ddf732c028 WatchSource:0}: Error finding container 55771b4d6cf2ebd04a186cf154cc2200451310a9ada9c5b9b10d07ddf732c028: Status 404 returned error can't find the container with id 55771b4d6cf2ebd04a186cf154cc2200451310a9ada9c5b9b10d07ddf732c028 Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.427830 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.588165 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.755808 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsj7d\" (UniqueName: \"kubernetes.io/projected/3d9300e0-f4d8-43cf-9595-ede2e45afede-kube-api-access-bsj7d\") pod \"3d9300e0-f4d8-43cf-9595-ede2e45afede\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.755896 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-combined-ca-bundle\") pod \"3d9300e0-f4d8-43cf-9595-ede2e45afede\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.755989 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-config-data\") pod \"3d9300e0-f4d8-43cf-9595-ede2e45afede\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.756050 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d9300e0-f4d8-43cf-9595-ede2e45afede-logs\") pod \"3d9300e0-f4d8-43cf-9595-ede2e45afede\" (UID: \"3d9300e0-f4d8-43cf-9595-ede2e45afede\") " Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.764197 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9300e0-f4d8-43cf-9595-ede2e45afede-logs" (OuterVolumeSpecName: "logs") pod "3d9300e0-f4d8-43cf-9595-ede2e45afede" (UID: "3d9300e0-f4d8-43cf-9595-ede2e45afede"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.772823 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9300e0-f4d8-43cf-9595-ede2e45afede-kube-api-access-bsj7d" (OuterVolumeSpecName: "kube-api-access-bsj7d") pod "3d9300e0-f4d8-43cf-9595-ede2e45afede" (UID: "3d9300e0-f4d8-43cf-9595-ede2e45afede"). InnerVolumeSpecName "kube-api-access-bsj7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.775182 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsj7d\" (UniqueName: \"kubernetes.io/projected/3d9300e0-f4d8-43cf-9595-ede2e45afede-kube-api-access-bsj7d\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.775221 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d9300e0-f4d8-43cf-9595-ede2e45afede-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.787002 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f35cfc-4bce-46a8-a1d3-29fa9317eae1" path="/var/lib/kubelet/pods/63f35cfc-4bce-46a8-a1d3-29fa9317eae1/volumes" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.794029 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d9300e0-f4d8-43cf-9595-ede2e45afede" (UID: "3d9300e0-f4d8-43cf-9595-ede2e45afede"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.796757 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-config-data" (OuterVolumeSpecName: "config-data") pod "3d9300e0-f4d8-43cf-9595-ede2e45afede" (UID: "3d9300e0-f4d8-43cf-9595-ede2e45afede"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.876546 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:32 crc kubenswrapper[4688]: I1125 12:34:32.876572 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9300e0-f4d8-43cf-9595-ede2e45afede-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.251993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d9300e0-f4d8-43cf-9595-ede2e45afede","Type":"ContainerDied","Data":"040e90b744fe07ea005d903b6824c7e7c72cf6d1ef3500ac2728abe69f90e908"} Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.252105 4688 scope.go:117] "RemoveContainer" containerID="e56db0d14c303dbafc2a77b44d0607ebdb3e7a4a8d16655728a87c1b94cb9cc1" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.252122 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.254131 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c53b6538-362c-4e79-ab5f-578cfe6b1ab9","Type":"ContainerStarted","Data":"2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986"} Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.254191 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c53b6538-362c-4e79-ab5f-578cfe6b1ab9","Type":"ContainerStarted","Data":"55771b4d6cf2ebd04a186cf154cc2200451310a9ada9c5b9b10d07ddf732c028"} Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.278324 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.278300316 podStartE2EDuration="2.278300316s" podCreationTimestamp="2025-11-25 12:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:33.271805833 +0000 UTC m=+1223.381434701" watchObservedRunningTime="2025-11-25 12:34:33.278300316 +0000 UTC m=+1223.387929194" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.279001 4688 scope.go:117] "RemoveContainer" containerID="186f76a5c036dc3717e5d35eb54fd7b632592c21aeb309c35cc11e5214b4f312" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.317625 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.324036 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.332655 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:33 crc kubenswrapper[4688]: E1125 12:34:33.333121 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-log" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.333146 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-log" Nov 25 12:34:33 crc kubenswrapper[4688]: E1125 12:34:33.333184 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-api" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.333193 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-api" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.333491 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-log" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.333556 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" containerName="nova-api-api" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.334766 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.336628 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.352575 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.492027 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-logs\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.492113 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.492155 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-config-data\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.493233 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrz66\" (UniqueName: \"kubernetes.io/projected/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-kube-api-access-hrz66\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.595864 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-logs\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.595942 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.595985 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-config-data\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.596136 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrz66\" (UniqueName: \"kubernetes.io/projected/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-kube-api-access-hrz66\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.596616 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-logs\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " 
pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.603824 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.604037 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-config-data\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.627955 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrz66\" (UniqueName: \"kubernetes.io/projected/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-kube-api-access-hrz66\") pod \"nova-api-0\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " pod="openstack/nova-api-0" Nov 25 12:34:33 crc kubenswrapper[4688]: I1125 12:34:33.664070 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:34:34 crc kubenswrapper[4688]: I1125 12:34:34.106312 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:34 crc kubenswrapper[4688]: W1125 12:34:34.110397 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae5a6fb9_0ece_4eb9_885d_da1cadd6feeb.slice/crio-64bf32efae2d3e5e38a366a8c98c43893f9a3f22547475b65fcf275ebe1a0247 WatchSource:0}: Error finding container 64bf32efae2d3e5e38a366a8c98c43893f9a3f22547475b65fcf275ebe1a0247: Status 404 returned error can't find the container with id 64bf32efae2d3e5e38a366a8c98c43893f9a3f22547475b65fcf275ebe1a0247 Nov 25 12:34:34 crc kubenswrapper[4688]: I1125 12:34:34.262095 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb","Type":"ContainerStarted","Data":"64bf32efae2d3e5e38a366a8c98c43893f9a3f22547475b65fcf275ebe1a0247"} Nov 25 12:34:34 crc kubenswrapper[4688]: I1125 12:34:34.639769 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 12:34:34 crc kubenswrapper[4688]: I1125 12:34:34.640087 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 12:34:34 crc kubenswrapper[4688]: I1125 12:34:34.760472 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9300e0-f4d8-43cf-9595-ede2e45afede" path="/var/lib/kubelet/pods/3d9300e0-f4d8-43cf-9595-ede2e45afede/volumes" Nov 25 12:34:35 crc kubenswrapper[4688]: I1125 12:34:35.273348 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb","Type":"ContainerStarted","Data":"b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a"} Nov 25 12:34:35 crc kubenswrapper[4688]: I1125 12:34:35.273762 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb","Type":"ContainerStarted","Data":"c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1"} Nov 25 12:34:35 crc kubenswrapper[4688]: I1125 12:34:35.296629 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.296612559 podStartE2EDuration="2.296612559s" podCreationTimestamp="2025-11-25 12:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:35.295816688 +0000 UTC m=+1225.405445556" watchObservedRunningTime="2025-11-25 12:34:35.296612559 +0000 UTC m=+1225.406241427" Nov 25 12:34:36 crc kubenswrapper[4688]: I1125 12:34:36.664154 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 12:34:36 crc kubenswrapper[4688]: I1125 12:34:36.902851 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 12:34:38 crc kubenswrapper[4688]: I1125 12:34:38.160671 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 12:34:39 crc kubenswrapper[4688]: I1125 12:34:39.640118 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 12:34:39 crc kubenswrapper[4688]: I1125 12:34:39.640441 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 12:34:40 crc kubenswrapper[4688]: I1125 12:34:40.657749 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 12:34:40 crc kubenswrapper[4688]: I1125 12:34:40.657740 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 12:34:41 crc kubenswrapper[4688]: I1125 12:34:41.903066 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 12:34:41 crc kubenswrapper[4688]: I1125 12:34:41.929221 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 12:34:41 crc kubenswrapper[4688]: I1125 12:34:41.991898 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 12:34:41 crc kubenswrapper[4688]: I1125 12:34:41.992141 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="91c71377-dafd-4693-9408-7c0ec206490e" containerName="kube-state-metrics" containerID="cri-o://98147f8c378f701ff4e9b00d742f4d8838d4e977d586d1429ea156e6450e1895" gracePeriod=30 Nov 25 12:34:42 crc kubenswrapper[4688]: I1125 12:34:42.338656 4688 generic.go:334] "Generic (PLEG): container finished" podID="91c71377-dafd-4693-9408-7c0ec206490e" containerID="98147f8c378f701ff4e9b00d742f4d8838d4e977d586d1429ea156e6450e1895" exitCode=2 Nov 25 12:34:42 crc kubenswrapper[4688]: I1125 12:34:42.338726 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"91c71377-dafd-4693-9408-7c0ec206490e","Type":"ContainerDied","Data":"98147f8c378f701ff4e9b00d742f4d8838d4e977d586d1429ea156e6450e1895"} Nov 25 12:34:42 crc kubenswrapper[4688]: I1125 12:34:42.406921 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-scheduler-0" Nov 25 12:34:42 crc kubenswrapper[4688]: I1125 12:34:42.533953 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 12:34:42 crc kubenswrapper[4688]: I1125 12:34:42.656084 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zf89\" (UniqueName: \"kubernetes.io/projected/91c71377-dafd-4693-9408-7c0ec206490e-kube-api-access-9zf89\") pod \"91c71377-dafd-4693-9408-7c0ec206490e\" (UID: \"91c71377-dafd-4693-9408-7c0ec206490e\") " Nov 25 12:34:42 crc kubenswrapper[4688]: I1125 12:34:42.690937 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c71377-dafd-4693-9408-7c0ec206490e-kube-api-access-9zf89" (OuterVolumeSpecName: "kube-api-access-9zf89") pod "91c71377-dafd-4693-9408-7c0ec206490e" (UID: "91c71377-dafd-4693-9408-7c0ec206490e"). InnerVolumeSpecName "kube-api-access-9zf89". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:42 crc kubenswrapper[4688]: I1125 12:34:42.758539 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zf89\" (UniqueName: \"kubernetes.io/projected/91c71377-dafd-4693-9408-7c0ec206490e-kube-api-access-9zf89\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.348729 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"91c71377-dafd-4693-9408-7c0ec206490e","Type":"ContainerDied","Data":"7f40b20b839c6bc8874b0f5657b2987cd1127e12c2f8a9b68e838af94409857c"} Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.348789 4688 scope.go:117] "RemoveContainer" containerID="98147f8c378f701ff4e9b00d742f4d8838d4e977d586d1429ea156e6450e1895" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.348869 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.379989 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.400568 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.415405 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 12:34:43 crc kubenswrapper[4688]: E1125 12:34:43.415949 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c71377-dafd-4693-9408-7c0ec206490e" containerName="kube-state-metrics" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.415969 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c71377-dafd-4693-9408-7c0ec206490e" containerName="kube-state-metrics" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.416149 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c71377-dafd-4693-9408-7c0ec206490e" containerName="kube-state-metrics" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.416831 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.420293 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.420486 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.425077 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.574428 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.574832 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.574925 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bf7d\" (UniqueName: \"kubernetes.io/projected/95c2802c-7143-4d63-8959-434c04453333-kube-api-access-4bf7d\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.575171 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.665253 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.665632 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.676732 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.676809 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bf7d\" (UniqueName: \"kubernetes.io/projected/95c2802c-7143-4d63-8959-434c04453333-kube-api-access-4bf7d\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.676975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.677098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.682454 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.685254 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.692224 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/95c2802c-7143-4d63-8959-434c04453333-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.695507 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bf7d\" (UniqueName: \"kubernetes.io/projected/95c2802c-7143-4d63-8959-434c04453333-kube-api-access-4bf7d\") pod \"kube-state-metrics-0\" (UID: \"95c2802c-7143-4d63-8959-434c04453333\") " pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.737782 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.966529 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.966816 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="ceilometer-central-agent" containerID="cri-o://02938b14718d0dfb16c296cdcc7ee4bc30389cd98b85174bad137942495a5a0d" gracePeriod=30 Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.966953 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="proxy-httpd" containerID="cri-o://013ffdba8f62b5db73f21320b636748281a50ac1cd90d3983788b481787a4a7c" gracePeriod=30 Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.967001 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="sg-core" containerID="cri-o://b11bd8d39f2bb9bcc16a1abe8f36c68bdf52abc29c128751a7fab19498c8e7b9" gracePeriod=30 Nov 25 12:34:43 crc kubenswrapper[4688]: I1125 12:34:43.967032 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="ceilometer-notification-agent" containerID="cri-o://94570ff0dabc8a83fc39e32624a577ae73249c58ab9fe85d2e71b87c06a0cc41" gracePeriod=30 Nov 25 12:34:44 crc kubenswrapper[4688]: I1125 12:34:44.177070 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 12:34:44 crc kubenswrapper[4688]: I1125 12:34:44.362023 4688 generic.go:334] "Generic (PLEG): container finished" podID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerID="013ffdba8f62b5db73f21320b636748281a50ac1cd90d3983788b481787a4a7c" exitCode=0 Nov 25 12:34:44 crc kubenswrapper[4688]: I1125 12:34:44.362063 4688 generic.go:334] "Generic (PLEG): container finished" podID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerID="b11bd8d39f2bb9bcc16a1abe8f36c68bdf52abc29c128751a7fab19498c8e7b9" exitCode=2 Nov 25 12:34:44 crc kubenswrapper[4688]: I1125 12:34:44.362105 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerDied","Data":"013ffdba8f62b5db73f21320b636748281a50ac1cd90d3983788b481787a4a7c"} Nov 25 12:34:44 crc kubenswrapper[4688]: I1125 12:34:44.362164 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerDied","Data":"b11bd8d39f2bb9bcc16a1abe8f36c68bdf52abc29c128751a7fab19498c8e7b9"} Nov 25 12:34:44 crc kubenswrapper[4688]: I1125 12:34:44.363785 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95c2802c-7143-4d63-8959-434c04453333","Type":"ContainerStarted","Data":"956a9b06927b89bbec4b038045824813e131199f73389c1f5c367cb6facd036a"} Nov 25 12:34:44 crc kubenswrapper[4688]: I1125 12:34:44.707756 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 12:34:44 
crc kubenswrapper[4688]: I1125 12:34:44.707805 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 12:34:44 crc kubenswrapper[4688]: I1125 12:34:44.755223 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c71377-dafd-4693-9408-7c0ec206490e" path="/var/lib/kubelet/pods/91c71377-dafd-4693-9408-7c0ec206490e/volumes" Nov 25 12:34:45 crc kubenswrapper[4688]: I1125 12:34:45.375718 4688 generic.go:334] "Generic (PLEG): container finished" podID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerID="02938b14718d0dfb16c296cdcc7ee4bc30389cd98b85174bad137942495a5a0d" exitCode=0 Nov 25 12:34:45 crc kubenswrapper[4688]: I1125 12:34:45.375797 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerDied","Data":"02938b14718d0dfb16c296cdcc7ee4bc30389cd98b85174bad137942495a5a0d"} Nov 25 12:34:45 crc kubenswrapper[4688]: I1125 12:34:45.377501 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95c2802c-7143-4d63-8959-434c04453333","Type":"ContainerStarted","Data":"5c55a39d1d40ac2d31eb50e5d09324dfefde6c4db0d952bd95388f17e9cedca9"} Nov 25 12:34:45 crc kubenswrapper[4688]: I1125 12:34:45.377674 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 12:34:45 crc kubenswrapper[4688]: I1125 12:34:45.397822 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.0489204069999998 podStartE2EDuration="2.397799004s" podCreationTimestamp="2025-11-25 12:34:43 +0000 UTC" firstStartedPulling="2025-11-25 12:34:44.181095295 +0000 UTC m=+1234.290724163" lastFinishedPulling="2025-11-25 12:34:44.529973872 +0000 UTC m=+1234.639602760" observedRunningTime="2025-11-25 12:34:45.391344722 +0000 UTC m=+1235.500973590" watchObservedRunningTime="2025-11-25 12:34:45.397799004 +0000 UTC m=+1235.507427872" Nov 25 12:34:47 crc kubenswrapper[4688]: I1125 12:34:47.854277 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:34:47 crc kubenswrapper[4688]: I1125 12:34:47.854668 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:34:47 crc kubenswrapper[4688]: I1125 12:34:47.854720 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:34:47 crc kubenswrapper[4688]: I1125 12:34:47.855511 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"606e3c186faf0a77643eeee31f20e3a41380a34fa45ab23f8be805001dd713d2"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:34:47 crc kubenswrapper[4688]: I1125 12:34:47.855597 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://606e3c186faf0a77643eeee31f20e3a41380a34fa45ab23f8be805001dd713d2" gracePeriod=600 Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.411624 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="606e3c186faf0a77643eeee31f20e3a41380a34fa45ab23f8be805001dd713d2" exitCode=0 Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.411776 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"606e3c186faf0a77643eeee31f20e3a41380a34fa45ab23f8be805001dd713d2"} Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.412258 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"adac398a94564aa341b35f325f1f99096f13126fe72668e940408e0ca6a84914"} Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.412283 4688 scope.go:117] "RemoveContainer" containerID="bd4c77f22f04f95d12c0a6e31890a8c2be94485d18b032708ee7f7a088bd619a" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.426459 4688 generic.go:334] "Generic (PLEG): container finished" podID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerID="94570ff0dabc8a83fc39e32624a577ae73249c58ab9fe85d2e71b87c06a0cc41" exitCode=0 Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.426516 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerDied","Data":"94570ff0dabc8a83fc39e32624a577ae73249c58ab9fe85d2e71b87c06a0cc41"} Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.550782 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.667433 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-config-data\") pod \"135ed56b-7e5c-41e7-a254-ab35c678cb20\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.667929 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-combined-ca-bundle\") pod \"135ed56b-7e5c-41e7-a254-ab35c678cb20\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.668064 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-log-httpd\") pod \"135ed56b-7e5c-41e7-a254-ab35c678cb20\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.668103 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-run-httpd\") pod \"135ed56b-7e5c-41e7-a254-ab35c678cb20\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.668139 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/135ed56b-7e5c-41e7-a254-ab35c678cb20-kube-api-access-fjqm2\") pod \"135ed56b-7e5c-41e7-a254-ab35c678cb20\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.668177 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-scripts\") pod \"135ed56b-7e5c-41e7-a254-ab35c678cb20\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.668211 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-sg-core-conf-yaml\") pod \"135ed56b-7e5c-41e7-a254-ab35c678cb20\" (UID: \"135ed56b-7e5c-41e7-a254-ab35c678cb20\") " Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.668709 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "135ed56b-7e5c-41e7-a254-ab35c678cb20" (UID: "135ed56b-7e5c-41e7-a254-ab35c678cb20"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.668749 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "135ed56b-7e5c-41e7-a254-ab35c678cb20" (UID: "135ed56b-7e5c-41e7-a254-ab35c678cb20"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.669043 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.669062 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135ed56b-7e5c-41e7-a254-ab35c678cb20-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.674129 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-scripts" (OuterVolumeSpecName: "scripts") pod "135ed56b-7e5c-41e7-a254-ab35c678cb20" (UID: "135ed56b-7e5c-41e7-a254-ab35c678cb20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.674394 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135ed56b-7e5c-41e7-a254-ab35c678cb20-kube-api-access-fjqm2" (OuterVolumeSpecName: "kube-api-access-fjqm2") pod "135ed56b-7e5c-41e7-a254-ab35c678cb20" (UID: "135ed56b-7e5c-41e7-a254-ab35c678cb20"). InnerVolumeSpecName "kube-api-access-fjqm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.704145 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "135ed56b-7e5c-41e7-a254-ab35c678cb20" (UID: "135ed56b-7e5c-41e7-a254-ab35c678cb20"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.748115 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "135ed56b-7e5c-41e7-a254-ab35c678cb20" (UID: "135ed56b-7e5c-41e7-a254-ab35c678cb20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.774882 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjqm2\" (UniqueName: \"kubernetes.io/projected/135ed56b-7e5c-41e7-a254-ab35c678cb20-kube-api-access-fjqm2\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.774923 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.774936 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.774950 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.901764 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-config-data" (OuterVolumeSpecName: "config-data") pod "135ed56b-7e5c-41e7-a254-ab35c678cb20" (UID: "135ed56b-7e5c-41e7-a254-ab35c678cb20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:48 crc kubenswrapper[4688]: I1125 12:34:48.978050 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135ed56b-7e5c-41e7-a254-ab35c678cb20-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.437829 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.438148 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135ed56b-7e5c-41e7-a254-ab35c678cb20","Type":"ContainerDied","Data":"7cfad5f4f1c321598b57f82b45b0dab6adeb408d4be7ac21eab0a0e8125c1eb7"} Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.438204 4688 scope.go:117] "RemoveContainer" containerID="013ffdba8f62b5db73f21320b636748281a50ac1cd90d3983788b481787a4a7c" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.465998 4688 scope.go:117] "RemoveContainer" containerID="b11bd8d39f2bb9bcc16a1abe8f36c68bdf52abc29c128751a7fab19498c8e7b9" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.478099 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.483833 4688 scope.go:117] "RemoveContainer" containerID="94570ff0dabc8a83fc39e32624a577ae73249c58ab9fe85d2e71b87c06a0cc41" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.490377 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.504263 4688 scope.go:117] "RemoveContainer" containerID="02938b14718d0dfb16c296cdcc7ee4bc30389cd98b85174bad137942495a5a0d" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.507044 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:49 crc kubenswrapper[4688]: E1125 12:34:49.507495 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="ceilometer-notification-agent" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.507518 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="ceilometer-notification-agent" Nov 25 12:34:49 crc kubenswrapper[4688]: E1125 12:34:49.509676 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="ceilometer-central-agent" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.509688 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="ceilometer-central-agent" Nov 25 12:34:49 crc kubenswrapper[4688]: E1125 12:34:49.509713 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="sg-core" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.509720 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="sg-core" Nov 25 12:34:49 crc kubenswrapper[4688]: E1125 12:34:49.509745 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="proxy-httpd" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.509752 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="proxy-httpd" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.510015 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="ceilometer-central-agent" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.510040 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="sg-core" Nov 25 12:34:49 crc 
kubenswrapper[4688]: I1125 12:34:49.510053 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="proxy-httpd" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.510064 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" containerName="ceilometer-notification-agent" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.525723 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.525864 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.531375 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.531597 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.537449 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.646548 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.660843 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.670501 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.690855 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.691046 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.691137 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-log-httpd\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.691168 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-config-data\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.691217 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-scripts\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " 
pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.691285 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.691317 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cf92\" (UniqueName: \"kubernetes.io/projected/c1008e55-9b13-47ef-bf1f-123d3898293c-kube-api-access-9cf92\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.691359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-run-httpd\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.793151 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.793239 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-log-httpd\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.793267 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-config-data\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.793308 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-scripts\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.793359 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.793384 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cf92\" (UniqueName: \"kubernetes.io/projected/c1008e55-9b13-47ef-bf1f-123d3898293c-kube-api-access-9cf92\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.793413 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-run-httpd\") pod 
\"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.793454 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.794226 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-log-httpd\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.794255 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-run-httpd\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.798495 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.799063 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-scripts\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.810630 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.810818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.811085 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-config-data\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.822079 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cf92\" (UniqueName: \"kubernetes.io/projected/c1008e55-9b13-47ef-bf1f-123d3898293c-kube-api-access-9cf92\") pod \"ceilometer-0\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " pod="openstack/ceilometer-0" Nov 25 12:34:49 crc kubenswrapper[4688]: I1125 12:34:49.851987 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:50 crc kubenswrapper[4688]: W1125 12:34:50.391700 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1008e55_9b13_47ef_bf1f_123d3898293c.slice/crio-debd72025ac0d95f00f82937548b615e7c7f7fafb473870a668f4ec702f05ffa WatchSource:0}: Error finding container debd72025ac0d95f00f82937548b615e7c7f7fafb473870a668f4ec702f05ffa: Status 404 returned error can't find the container with id debd72025ac0d95f00f82937548b615e7c7f7fafb473870a668f4ec702f05ffa Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.406433 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.474986 4688 generic.go:334] "Generic (PLEG): container finished" podID="c27defb6-83ca-455c-a198-6fc77fe0f901" containerID="75cf2ccc4f8b654add849ae66a3331317f5b0939f81571bde99647f759ec8b17" exitCode=137 Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.475077 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c27defb6-83ca-455c-a198-6fc77fe0f901","Type":"ContainerDied","Data":"75cf2ccc4f8b654add849ae66a3331317f5b0939f81571bde99647f759ec8b17"} Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.475452 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c27defb6-83ca-455c-a198-6fc77fe0f901","Type":"ContainerDied","Data":"d5b4b6ae2e933b1a7ad453c431c0590902fcccdf63df22d7f2c22dd90be781a4"} Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.475469 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b4b6ae2e933b1a7ad453c431c0590902fcccdf63df22d7f2c22dd90be781a4" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.480141 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerStarted","Data":"debd72025ac0d95f00f82937548b615e7c7f7fafb473870a668f4ec702f05ffa"} Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.485082 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.532373 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.713125 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m6xh\" (UniqueName: \"kubernetes.io/projected/c27defb6-83ca-455c-a198-6fc77fe0f901-kube-api-access-4m6xh\") pod \"c27defb6-83ca-455c-a198-6fc77fe0f901\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.713245 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-config-data\") pod \"c27defb6-83ca-455c-a198-6fc77fe0f901\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.713287 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-combined-ca-bundle\") pod \"c27defb6-83ca-455c-a198-6fc77fe0f901\" (UID: \"c27defb6-83ca-455c-a198-6fc77fe0f901\") " Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.718286 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27defb6-83ca-455c-a198-6fc77fe0f901-kube-api-access-4m6xh" (OuterVolumeSpecName: "kube-api-access-4m6xh") pod "c27defb6-83ca-455c-a198-6fc77fe0f901" (UID: "c27defb6-83ca-455c-a198-6fc77fe0f901"). InnerVolumeSpecName "kube-api-access-4m6xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.744889 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-config-data" (OuterVolumeSpecName: "config-data") pod "c27defb6-83ca-455c-a198-6fc77fe0f901" (UID: "c27defb6-83ca-455c-a198-6fc77fe0f901"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.748398 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c27defb6-83ca-455c-a198-6fc77fe0f901" (UID: "c27defb6-83ca-455c-a198-6fc77fe0f901"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.760720 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135ed56b-7e5c-41e7-a254-ab35c678cb20" path="/var/lib/kubelet/pods/135ed56b-7e5c-41e7-a254-ab35c678cb20/volumes" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.814983 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m6xh\" (UniqueName: \"kubernetes.io/projected/c27defb6-83ca-455c-a198-6fc77fe0f901-kube-api-access-4m6xh\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.815319 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:50 crc kubenswrapper[4688]: I1125 12:34:50.815330 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27defb6-83ca-455c-a198-6fc77fe0f901-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.493381 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.494188 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerStarted","Data":"c3a4fa8684a6585caee4713b6f3095cccad0ddc481daa842435a78f631df57d8"} Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.548799 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.559642 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.570940 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:51 crc kubenswrapper[4688]: E1125 12:34:51.571980 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27defb6-83ca-455c-a198-6fc77fe0f901" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.572019 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27defb6-83ca-455c-a198-6fc77fe0f901" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.572335 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27defb6-83ca-455c-a198-6fc77fe0f901" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.573199 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.575400 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.576228 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.576268 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.582504 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.738657 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.738729 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.738817 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6xc\" (UniqueName: \"kubernetes.io/projected/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-kube-api-access-zn6xc\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.738851 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.738899 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.841057 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.841123 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 
12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.841204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn6xc\" (UniqueName: \"kubernetes.io/projected/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-kube-api-access-zn6xc\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.841228 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.841281 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.848806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.849140 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.852379 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.853152 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.859803 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn6xc\" (UniqueName: \"kubernetes.io/projected/db662a26-0b85-4b43-9dcd-8b21fd64c3e9-kube-api-access-zn6xc\") pod \"nova-cell1-novncproxy-0\" (UID: \"db662a26-0b85-4b43-9dcd-8b21fd64c3e9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:51 crc kubenswrapper[4688]: I1125 12:34:51.894694 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:52 crc kubenswrapper[4688]: I1125 12:34:52.418422 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 12:34:52 crc kubenswrapper[4688]: W1125 12:34:52.426771 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb662a26_0b85_4b43_9dcd_8b21fd64c3e9.slice/crio-2ff2eaeda4b969e6845f7e1195e98b183de599ed5a837a68036269c66937953f WatchSource:0}: Error finding container 2ff2eaeda4b969e6845f7e1195e98b183de599ed5a837a68036269c66937953f: Status 404 returned error can't find the container with id 2ff2eaeda4b969e6845f7e1195e98b183de599ed5a837a68036269c66937953f Nov 25 12:34:52 crc kubenswrapper[4688]: I1125 12:34:52.512945 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerStarted","Data":"7a28a81eb14a80bd06ca932504028a9da13c040e78939c773e2d625fcfc91030"} Nov 25 12:34:52 crc kubenswrapper[4688]: I1125 12:34:52.512993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerStarted","Data":"81e77fc97a71271cac39a79e5b6f03229f3499e0b22e48282577d13723486f4b"} Nov 25 12:34:52 crc kubenswrapper[4688]: I1125 12:34:52.515567 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"db662a26-0b85-4b43-9dcd-8b21fd64c3e9","Type":"ContainerStarted","Data":"2ff2eaeda4b969e6845f7e1195e98b183de599ed5a837a68036269c66937953f"} Nov 25 12:34:52 crc kubenswrapper[4688]: I1125 12:34:52.756396 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27defb6-83ca-455c-a198-6fc77fe0f901" path="/var/lib/kubelet/pods/c27defb6-83ca-455c-a198-6fc77fe0f901/volumes" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.532543 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"db662a26-0b85-4b43-9dcd-8b21fd64c3e9","Type":"ContainerStarted","Data":"1435c1c2c5e3d45b299c186fbe6d3f7ea4a97537bf266dba77ec5a95a14ab7c4"} Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.675814 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.676202 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.676752 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.676797 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.682216 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.682359 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.705206 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.70516148 podStartE2EDuration="2.70516148s" podCreationTimestamp="2025-11-25 12:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:53.567959067 +0000 UTC m=+1243.677587935" watchObservedRunningTime="2025-11-25 12:34:53.70516148 +0000 UTC m=+1243.814790358" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.812890 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.893989 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6"] Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.896036 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:53 crc kubenswrapper[4688]: I1125 12:34:53.916060 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6"] Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.085684 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.085726 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.085805 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.085918 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-config\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.085977 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55r4l\" (UniqueName: \"kubernetes.io/projected/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-kube-api-access-55r4l\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.086003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.188942 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-config\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.189084 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55r4l\" (UniqueName: \"kubernetes.io/projected/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-kube-api-access-55r4l\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.189119 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.189202 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.189235 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.189329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.190746 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.190858 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.191373 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.191549 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-config\") pod 
\"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.192186 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.216705 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55r4l\" (UniqueName: \"kubernetes.io/projected/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-kube-api-access-55r4l\") pod \"dnsmasq-dns-6b7bbf7cf9-xfkn6\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.228577 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.555802 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerStarted","Data":"53112e5e4739e92fdc7deafca70c0e0a6d9f81c85b1b08a6a1028aa68c64924d"} Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.556316 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.591031 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.626918716 podStartE2EDuration="5.591009264s" podCreationTimestamp="2025-11-25 12:34:49 +0000 UTC" firstStartedPulling="2025-11-25 12:34:50.394792605 +0000 UTC m=+1240.504421473" lastFinishedPulling="2025-11-25 12:34:53.358883153 +0000 UTC m=+1243.468512021" observedRunningTime="2025-11-25 12:34:54.578998714 +0000 UTC m=+1244.688627582" watchObservedRunningTime="2025-11-25 12:34:54.591009264 +0000 UTC m=+1244.700638132" Nov 25 12:34:54 crc kubenswrapper[4688]: I1125 12:34:54.820893 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6"] Nov 25 12:34:55 crc kubenswrapper[4688]: I1125 12:34:55.506121 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:55 crc kubenswrapper[4688]: I1125 12:34:55.564901 4688 generic.go:334] "Generic (PLEG): container finished" podID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" containerID="5ec360c1700eed3a816c1a9c8377f3b7882b747b39fe7a63f61ca462df2ac216" exitCode=0 Nov 25 12:34:55 crc kubenswrapper[4688]: I1125 12:34:55.564955 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" event={"ID":"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04","Type":"ContainerDied","Data":"5ec360c1700eed3a816c1a9c8377f3b7882b747b39fe7a63f61ca462df2ac216"} Nov 25 12:34:55 crc kubenswrapper[4688]: I1125 12:34:55.564992 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" event={"ID":"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04","Type":"ContainerStarted","Data":"ec07be15a87118d7f42751c6a2dcbcef1467f92fe359c0d1f48e381d9d3e2ecb"} Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.431306 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.599487 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" event={"ID":"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04","Type":"ContainerStarted","Data":"692fd6b6f590e930938cc16fa00a4d7b50a8be0f4cda34b2d1625f6cef947382"} Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.599585 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-log" containerID="cri-o://c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1" gracePeriod=30 Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.599826 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="ceilometer-central-agent" containerID="cri-o://c3a4fa8684a6585caee4713b6f3095cccad0ddc481daa842435a78f631df57d8" gracePeriod=30 Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.599898 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-api" containerID="cri-o://b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a" gracePeriod=30 Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.600202 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="proxy-httpd" containerID="cri-o://53112e5e4739e92fdc7deafca70c0e0a6d9f81c85b1b08a6a1028aa68c64924d" gracePeriod=30 Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.600244 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="sg-core" containerID="cri-o://7a28a81eb14a80bd06ca932504028a9da13c040e78939c773e2d625fcfc91030" gracePeriod=30 Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.600276 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="ceilometer-notification-agent" containerID="cri-o://81e77fc97a71271cac39a79e5b6f03229f3499e0b22e48282577d13723486f4b" gracePeriod=30 Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.629738 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" podStartSLOduration=3.629721532 podStartE2EDuration="3.629721532s" podCreationTimestamp="2025-11-25 12:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:34:56.627059282 +0000 UTC m=+1246.736688150" watchObservedRunningTime="2025-11-25 12:34:56.629721532 +0000 UTC m=+1246.739350400" Nov 25 12:34:56 crc kubenswrapper[4688]: I1125 12:34:56.895000 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.624396 4688 generic.go:334] "Generic (PLEG): container finished" podID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerID="c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1" exitCode=143 Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.624478 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb","Type":"ContainerDied","Data":"c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1"} Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.628190 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerID="53112e5e4739e92fdc7deafca70c0e0a6d9f81c85b1b08a6a1028aa68c64924d" exitCode=0 Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.628219 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerID="7a28a81eb14a80bd06ca932504028a9da13c040e78939c773e2d625fcfc91030" exitCode=2 Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.628232 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerID="81e77fc97a71271cac39a79e5b6f03229f3499e0b22e48282577d13723486f4b" exitCode=0 Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.628241 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerID="c3a4fa8684a6585caee4713b6f3095cccad0ddc481daa842435a78f631df57d8" exitCode=0 Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.629262 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerDied","Data":"53112e5e4739e92fdc7deafca70c0e0a6d9f81c85b1b08a6a1028aa68c64924d"} Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.629303 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.629321 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerDied","Data":"7a28a81eb14a80bd06ca932504028a9da13c040e78939c773e2d625fcfc91030"} Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.629332 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerDied","Data":"81e77fc97a71271cac39a79e5b6f03229f3499e0b22e48282577d13723486f4b"} Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.629344 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerDied","Data":"c3a4fa8684a6585caee4713b6f3095cccad0ddc481daa842435a78f631df57d8"} Nov 25 12:34:57 crc kubenswrapper[4688]: I1125 12:34:57.901391 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.067503 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-run-httpd\") pod \"c1008e55-9b13-47ef-bf1f-123d3898293c\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068009 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-sg-core-conf-yaml\") pod \"c1008e55-9b13-47ef-bf1f-123d3898293c\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068035 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-ceilometer-tls-certs\") pod \"c1008e55-9b13-47ef-bf1f-123d3898293c\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068073 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-combined-ca-bundle\") pod \"c1008e55-9b13-47ef-bf1f-123d3898293c\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068105 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1008e55-9b13-47ef-bf1f-123d3898293c" (UID: "c1008e55-9b13-47ef-bf1f-123d3898293c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068124 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-log-httpd\") pod \"c1008e55-9b13-47ef-bf1f-123d3898293c\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068205 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-scripts\") pod \"c1008e55-9b13-47ef-bf1f-123d3898293c\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068350 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cf92\" (UniqueName: \"kubernetes.io/projected/c1008e55-9b13-47ef-bf1f-123d3898293c-kube-api-access-9cf92\") pod \"c1008e55-9b13-47ef-bf1f-123d3898293c\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068444 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-config-data\") pod \"c1008e55-9b13-47ef-bf1f-123d3898293c\" (UID: \"c1008e55-9b13-47ef-bf1f-123d3898293c\") " Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.068599 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1008e55-9b13-47ef-bf1f-123d3898293c" (UID: "c1008e55-9b13-47ef-bf1f-123d3898293c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.069380 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.069427 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1008e55-9b13-47ef-bf1f-123d3898293c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.079788 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1008e55-9b13-47ef-bf1f-123d3898293c-kube-api-access-9cf92" (OuterVolumeSpecName: "kube-api-access-9cf92") pod "c1008e55-9b13-47ef-bf1f-123d3898293c" (UID: "c1008e55-9b13-47ef-bf1f-123d3898293c"). InnerVolumeSpecName "kube-api-access-9cf92". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.080834 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-scripts" (OuterVolumeSpecName: "scripts") pod "c1008e55-9b13-47ef-bf1f-123d3898293c" (UID: "c1008e55-9b13-47ef-bf1f-123d3898293c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.104317 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1008e55-9b13-47ef-bf1f-123d3898293c" (UID: "c1008e55-9b13-47ef-bf1f-123d3898293c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.142651 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c1008e55-9b13-47ef-bf1f-123d3898293c" (UID: "c1008e55-9b13-47ef-bf1f-123d3898293c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.171281 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.171314 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.171324 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.171333 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cf92\" (UniqueName: \"kubernetes.io/projected/c1008e55-9b13-47ef-bf1f-123d3898293c-kube-api-access-9cf92\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.193012 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1008e55-9b13-47ef-bf1f-123d3898293c" (UID: "c1008e55-9b13-47ef-bf1f-123d3898293c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.195743 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-config-data" (OuterVolumeSpecName: "config-data") pod "c1008e55-9b13-47ef-bf1f-123d3898293c" (UID: "c1008e55-9b13-47ef-bf1f-123d3898293c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.272746 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.272777 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1008e55-9b13-47ef-bf1f-123d3898293c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.642238 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1008e55-9b13-47ef-bf1f-123d3898293c","Type":"ContainerDied","Data":"debd72025ac0d95f00f82937548b615e7c7f7fafb473870a668f4ec702f05ffa"} Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.642307 4688 scope.go:117] "RemoveContainer" containerID="53112e5e4739e92fdc7deafca70c0e0a6d9f81c85b1b08a6a1028aa68c64924d" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.642305 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.673261 4688 scope.go:117] "RemoveContainer" containerID="7a28a81eb14a80bd06ca932504028a9da13c040e78939c773e2d625fcfc91030" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.690031 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.701822 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.707985 4688 scope.go:117] "RemoveContainer" containerID="81e77fc97a71271cac39a79e5b6f03229f3499e0b22e48282577d13723486f4b" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.713432 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:58 crc kubenswrapper[4688]: E1125 12:34:58.713948 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="ceilometer-notification-agent" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.713975 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="ceilometer-notification-agent" Nov 25 12:34:58 crc kubenswrapper[4688]: E1125 12:34:58.714005 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="ceilometer-central-agent" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.714014 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="ceilometer-central-agent" Nov 25 12:34:58 crc kubenswrapper[4688]: E1125 12:34:58.714029 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="sg-core" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.714037 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="sg-core" Nov 25 12:34:58 crc kubenswrapper[4688]: E1125 12:34:58.714051 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="proxy-httpd" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.714057 4688 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="proxy-httpd" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.714244 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="sg-core" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.714261 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="proxy-httpd" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.714275 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="ceilometer-central-agent" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.714290 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" containerName="ceilometer-notification-agent" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.716196 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.718755 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.719403 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.719730 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.724319 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.742775 4688 scope.go:117] "RemoveContainer" containerID="c3a4fa8684a6585caee4713b6f3095cccad0ddc481daa842435a78f631df57d8" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.752434 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1008e55-9b13-47ef-bf1f-123d3898293c" path="/var/lib/kubelet/pods/c1008e55-9b13-47ef-bf1f-123d3898293c/volumes" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.883344 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.883696 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.883748 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-scripts\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.883809 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.883907 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-config-data\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.883928 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdkht\" (UniqueName: \"kubernetes.io/projected/22a16017-15b1-40ff-89f3-d1ae2620d6f4-kube-api-access-qdkht\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.883969 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-log-httpd\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.883988 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-run-httpd\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.985182 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-scripts\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.985992 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.986080 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-config-data\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.986144 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdkht\" (UniqueName: \"kubernetes.io/projected/22a16017-15b1-40ff-89f3-d1ae2620d6f4-kube-api-access-qdkht\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.986197 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-log-httpd\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.986219 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-run-httpd\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.986302 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.986360 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.987214 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-log-httpd\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.987241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-run-httpd\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.989982 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.990523 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-config-data\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.991178 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-scripts\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.991710 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:58 crc kubenswrapper[4688]: I1125 12:34:58.998710 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:59 crc kubenswrapper[4688]: I1125 12:34:59.004216 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdkht\" 
(UniqueName: \"kubernetes.io/projected/22a16017-15b1-40ff-89f3-d1ae2620d6f4-kube-api-access-qdkht\") pod \"ceilometer-0\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " pod="openstack/ceilometer-0" Nov 25 12:34:59 crc kubenswrapper[4688]: I1125 12:34:59.039437 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 12:34:59 crc kubenswrapper[4688]: I1125 12:34:59.495317 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 12:34:59 crc kubenswrapper[4688]: I1125 12:34:59.655868 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerStarted","Data":"a6b8ba93044548658e941d96e41c477c5ef7a52fa2f036548fe68fb375d71652"} Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.178794 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.319732 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-logs\") pod \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.320159 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-logs" (OuterVolumeSpecName: "logs") pod "ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" (UID: "ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.320260 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-combined-ca-bundle\") pod \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.320361 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-config-data\") pod \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.320441 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrz66\" (UniqueName: \"kubernetes.io/projected/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-kube-api-access-hrz66\") pod \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\" (UID: \"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb\") " Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.321994 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.324541 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-kube-api-access-hrz66" (OuterVolumeSpecName: "kube-api-access-hrz66") pod "ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" (UID: "ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb"). InnerVolumeSpecName "kube-api-access-hrz66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.356741 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-config-data" (OuterVolumeSpecName: "config-data") pod "ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" (UID: "ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.364232 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" (UID: "ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.423696 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.423724 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.423733 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrz66\" (UniqueName: \"kubernetes.io/projected/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb-kube-api-access-hrz66\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.668697 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerStarted","Data":"2a5b2c6fb017b90db99af962c030313a2325be2e77a490f556b03742e615c82b"} Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.673406 4688 generic.go:334] "Generic (PLEG): container finished" podID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerID="b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a" exitCode=0 Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.673482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb","Type":"ContainerDied","Data":"b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a"} Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.673553 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb","Type":"ContainerDied","Data":"64bf32efae2d3e5e38a366a8c98c43893f9a3f22547475b65fcf275ebe1a0247"} Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.677113 4688 scope.go:117] "RemoveContainer" containerID="b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.677337 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.709856 4688 scope.go:117] "RemoveContainer" containerID="c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.737693 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.765593 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.773168 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:00 crc kubenswrapper[4688]: E1125 12:35:00.773632 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-log" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.773654 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-log" Nov 25 12:35:00 crc kubenswrapper[4688]: E1125 12:35:00.773693 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-api" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.773701 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-api" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.773920 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-api" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.773943 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" containerName="nova-api-log" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.775170 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.781900 4688 scope.go:117] "RemoveContainer" containerID="b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.782185 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.783048 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.784411 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 12:35:00 crc kubenswrapper[4688]: E1125 12:35:00.784499 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a\": container with ID starting with b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a not found: ID does not exist" containerID="b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.784558 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a"} err="failed to get container status \"b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a\": rpc error: code = NotFound desc = could not find container \"b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a\": container with ID starting with b7ed0b1fb76d0fb3d3a4783e7fd451dcd96814ddd07781b520fec1c9bdf91d7a not found: ID does not exist" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.784598 4688 scope.go:117] "RemoveContainer" containerID="c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1" Nov 25 12:35:00 crc kubenswrapper[4688]: E1125 12:35:00.785099 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1\": container with ID starting with c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1 not found: ID does not exist" containerID="c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.785143 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1"} err="failed to get container status \"c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1\": rpc error: code = NotFound desc = could not find container \"c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1\": container with ID starting with c180538ce3b9357c8228a40497a84d339730a19eff586ec9f08552730fdf6df1 not found: ID does not exist" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.793396 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.842989 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-config-data\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc 
kubenswrapper[4688]: I1125 12:35:00.843078 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.843164 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.843210 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-logs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.843228 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvthg\" (UniqueName: \"kubernetes.io/projected/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-kube-api-access-wvthg\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.843255 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.944452 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-config-data\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.944617 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.944705 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.944721 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-logs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.944740 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvthg\" (UniqueName: \"kubernetes.io/projected/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-kube-api-access-wvthg\") pod \"nova-api-0\" (UID: 
\"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.944769 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.946879 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-logs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.949002 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.950233 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.951730 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.952179 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-config-data\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:00 crc kubenswrapper[4688]: I1125 12:35:00.963226 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvthg\" (UniqueName: \"kubernetes.io/projected/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-kube-api-access-wvthg\") pod \"nova-api-0\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " pod="openstack/nova-api-0" Nov 25 12:35:01 crc kubenswrapper[4688]: I1125 12:35:01.103259 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:35:01 crc kubenswrapper[4688]: I1125 12:35:01.541220 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:01 crc kubenswrapper[4688]: W1125 12:35:01.544058 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b4c776a_7dbb_479f_a8e7_ad6575aacea2.slice/crio-520d738d4a23729b3be927f6e43a1a6689d2bdb447112d2b2b9955f144f6893e WatchSource:0}: Error finding container 520d738d4a23729b3be927f6e43a1a6689d2bdb447112d2b2b9955f144f6893e: Status 404 returned error can't find the container with id 520d738d4a23729b3be927f6e43a1a6689d2bdb447112d2b2b9955f144f6893e Nov 25 12:35:01 crc kubenswrapper[4688]: I1125 12:35:01.687814 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b4c776a-7dbb-479f-a8e7-ad6575aacea2","Type":"ContainerStarted","Data":"520d738d4a23729b3be927f6e43a1a6689d2bdb447112d2b2b9955f144f6893e"} Nov 25 12:35:01 crc kubenswrapper[4688]: I1125 12:35:01.689878 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerStarted","Data":"30ecb35dcc075763385580b26358f737ba35793620b8275036ec25c1295e5f63"} Nov 25 12:35:01 crc kubenswrapper[4688]: I1125 12:35:01.896017 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:35:01 crc kubenswrapper[4688]: I1125 12:35:01.914085 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.708204 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerStarted","Data":"5adf4b165b15721c18c6595f30305bb4ae7d04310ef875e28f0c48f90d0a89ea"} Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.709942 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b4c776a-7dbb-479f-a8e7-ad6575aacea2","Type":"ContainerStarted","Data":"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2"} Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.709975 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b4c776a-7dbb-479f-a8e7-ad6575aacea2","Type":"ContainerStarted","Data":"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c"} Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.728046 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.729915 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7298913110000003 podStartE2EDuration="2.729891311s" podCreationTimestamp="2025-11-25 12:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:35:02.726815229 +0000 UTC m=+1252.836444087" watchObservedRunningTime="2025-11-25 12:35:02.729891311 +0000 UTC m=+1252.839520179" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.751036 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb" 
path="/var/lib/kubelet/pods/ae5a6fb9-0ece-4eb9-885d-da1cadd6feeb/volumes" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.888849 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-s8lfl"] Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.890519 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.899753 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.901031 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.902153 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-s8lfl"] Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.984767 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-scripts\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.984834 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.984888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld7gb\" (UniqueName: \"kubernetes.io/projected/563d1d64-50e6-4460-8365-5ead3bc46347-kube-api-access-ld7gb\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:02 crc kubenswrapper[4688]: I1125 12:35:02.984927 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-config-data\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.086432 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.086524 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld7gb\" (UniqueName: \"kubernetes.io/projected/563d1d64-50e6-4460-8365-5ead3bc46347-kube-api-access-ld7gb\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.086585 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-config-data\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.086680 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-scripts\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.092079 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-scripts\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.092215 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.104598 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-config-data\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.106747 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld7gb\" (UniqueName: \"kubernetes.io/projected/563d1d64-50e6-4460-8365-5ead3bc46347-kube-api-access-ld7gb\") pod \"nova-cell1-cell-mapping-s8lfl\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.222247 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.712454 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-s8lfl"] Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.721410 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerStarted","Data":"3825f0b8c07447b613011ba2b8d4e7847c7052cad963261b4dba73f89723a13e"} Nov 25 12:35:03 crc kubenswrapper[4688]: I1125 12:35:03.753860 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9995998849999999 podStartE2EDuration="5.753840823s" podCreationTimestamp="2025-11-25 12:34:58 +0000 UTC" firstStartedPulling="2025-11-25 12:34:59.49832059 +0000 UTC m=+1249.607949458" lastFinishedPulling="2025-11-25 12:35:03.252561528 +0000 UTC m=+1253.362190396" observedRunningTime="2025-11-25 12:35:03.746442045 +0000 UTC m=+1253.856070923" watchObservedRunningTime="2025-11-25 12:35:03.753840823 +0000 UTC m=+1253.863469691" Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.230724 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.289105 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xthf7"] Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.289996 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" podUID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerName="dnsmasq-dns" containerID="cri-o://4e8f5286297dc6037bb794e8ce27c5de5abcddb55177009389463a32438bea08" gracePeriod=10 Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.747670 4688 generic.go:334] "Generic (PLEG): container finished" podID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerID="4e8f5286297dc6037bb794e8ce27c5de5abcddb55177009389463a32438bea08" exitCode=0 Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.774929 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.775351 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" event={"ID":"f448e93d-acc9-49aa-9404-bd886cd6a4f1","Type":"ContainerDied","Data":"4e8f5286297dc6037bb794e8ce27c5de5abcddb55177009389463a32438bea08"} Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.775380 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s8lfl" event={"ID":"563d1d64-50e6-4460-8365-5ead3bc46347","Type":"ContainerStarted","Data":"a0b7650af5ba7ffdbaed4131e6e1fe89976b4c18e3d4d0d4920edad38794a4da"} Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.775391 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s8lfl" event={"ID":"563d1d64-50e6-4460-8365-5ead3bc46347","Type":"ContainerStarted","Data":"cfaf030e7c045131a7e3a5d4aa69fdc14860c18cc962d79ea5c66ad84024c9cf"} Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.802957 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-s8lfl" podStartSLOduration=2.802935336 podStartE2EDuration="2.802935336s" podCreationTimestamp="2025-11-25 12:35:02 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:35:04.786191719 +0000 UTC m=+1254.895820587" watchObservedRunningTime="2025-11-25 12:35:04.802935336 +0000 UTC m=+1254.912564204" Nov 25 12:35:04 crc kubenswrapper[4688]: I1125 12:35:04.897382 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.055802 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-sb\") pod \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.056110 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-swift-storage-0\") pod \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.056215 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-nb\") pod \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.056412 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-config\") pod \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.056822 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-svc\") pod \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.056939 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2glwb\" (UniqueName: \"kubernetes.io/projected/f448e93d-acc9-49aa-9404-bd886cd6a4f1-kube-api-access-2glwb\") pod \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\" (UID: \"f448e93d-acc9-49aa-9404-bd886cd6a4f1\") " Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.081894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f448e93d-acc9-49aa-9404-bd886cd6a4f1-kube-api-access-2glwb" (OuterVolumeSpecName: "kube-api-access-2glwb") pod "f448e93d-acc9-49aa-9404-bd886cd6a4f1" (UID: "f448e93d-acc9-49aa-9404-bd886cd6a4f1"). InnerVolumeSpecName "kube-api-access-2glwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.108896 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-config" (OuterVolumeSpecName: "config") pod "f448e93d-acc9-49aa-9404-bd886cd6a4f1" (UID: "f448e93d-acc9-49aa-9404-bd886cd6a4f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.122043 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f448e93d-acc9-49aa-9404-bd886cd6a4f1" (UID: "f448e93d-acc9-49aa-9404-bd886cd6a4f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.124115 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f448e93d-acc9-49aa-9404-bd886cd6a4f1" (UID: "f448e93d-acc9-49aa-9404-bd886cd6a4f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.132064 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f448e93d-acc9-49aa-9404-bd886cd6a4f1" (UID: "f448e93d-acc9-49aa-9404-bd886cd6a4f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.138098 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f448e93d-acc9-49aa-9404-bd886cd6a4f1" (UID: "f448e93d-acc9-49aa-9404-bd886cd6a4f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.159096 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.159132 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.159142 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.159153 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.159161 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f448e93d-acc9-49aa-9404-bd886cd6a4f1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.159170 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2glwb\" (UniqueName: \"kubernetes.io/projected/f448e93d-acc9-49aa-9404-bd886cd6a4f1-kube-api-access-2glwb\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.778765 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.778810 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" event={"ID":"f448e93d-acc9-49aa-9404-bd886cd6a4f1","Type":"ContainerDied","Data":"451194fd789b16f6dec28fd23c19bc868813dc10bbdd28782abc1f8730be9dfe"} Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.778888 4688 scope.go:117] "RemoveContainer" containerID="4e8f5286297dc6037bb794e8ce27c5de5abcddb55177009389463a32438bea08" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.803818 4688 scope.go:117] "RemoveContainer" containerID="7d06d41e19637524ec089b74f42de5c7a41513eb83f281873bb80b907cf98f6d" Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.813253 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xthf7"] Nov 25 12:35:05 crc kubenswrapper[4688]: I1125 12:35:05.822150 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xthf7"] Nov 25 12:35:06 crc kubenswrapper[4688]: I1125 12:35:06.751490 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" path="/var/lib/kubelet/pods/f448e93d-acc9-49aa-9404-bd886cd6a4f1/volumes" Nov 25 12:35:08 crc kubenswrapper[4688]: I1125 12:35:08.809160 4688 generic.go:334] "Generic (PLEG): container finished" podID="563d1d64-50e6-4460-8365-5ead3bc46347" containerID="a0b7650af5ba7ffdbaed4131e6e1fe89976b4c18e3d4d0d4920edad38794a4da" exitCode=0 Nov 25 12:35:08 crc kubenswrapper[4688]: I1125 12:35:08.809244 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s8lfl" event={"ID":"563d1d64-50e6-4460-8365-5ead3bc46347","Type":"ContainerDied","Data":"a0b7650af5ba7ffdbaed4131e6e1fe89976b4c18e3d4d0d4920edad38794a4da"} Nov 25 12:35:09 crc kubenswrapper[4688]: I1125 12:35:09.721066 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9b86998b5-xthf7" podUID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: i/o timeout" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.159901 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.262267 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld7gb\" (UniqueName: \"kubernetes.io/projected/563d1d64-50e6-4460-8365-5ead3bc46347-kube-api-access-ld7gb\") pod \"563d1d64-50e6-4460-8365-5ead3bc46347\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.262412 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-config-data\") pod \"563d1d64-50e6-4460-8365-5ead3bc46347\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.262447 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-scripts\") pod \"563d1d64-50e6-4460-8365-5ead3bc46347\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.262697 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-combined-ca-bundle\") pod \"563d1d64-50e6-4460-8365-5ead3bc46347\" (UID: \"563d1d64-50e6-4460-8365-5ead3bc46347\") " Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.267329 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-scripts" (OuterVolumeSpecName: "scripts") pod "563d1d64-50e6-4460-8365-5ead3bc46347" (UID: "563d1d64-50e6-4460-8365-5ead3bc46347"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.267894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563d1d64-50e6-4460-8365-5ead3bc46347-kube-api-access-ld7gb" (OuterVolumeSpecName: "kube-api-access-ld7gb") pod "563d1d64-50e6-4460-8365-5ead3bc46347" (UID: "563d1d64-50e6-4460-8365-5ead3bc46347"). InnerVolumeSpecName "kube-api-access-ld7gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.288645 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-config-data" (OuterVolumeSpecName: "config-data") pod "563d1d64-50e6-4460-8365-5ead3bc46347" (UID: "563d1d64-50e6-4460-8365-5ead3bc46347"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.289949 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "563d1d64-50e6-4460-8365-5ead3bc46347" (UID: "563d1d64-50e6-4460-8365-5ead3bc46347"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.365626 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.365659 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld7gb\" (UniqueName: \"kubernetes.io/projected/563d1d64-50e6-4460-8365-5ead3bc46347-kube-api-access-ld7gb\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.365671 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.365679 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/563d1d64-50e6-4460-8365-5ead3bc46347-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.827511 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s8lfl" event={"ID":"563d1d64-50e6-4460-8365-5ead3bc46347","Type":"ContainerDied","Data":"cfaf030e7c045131a7e3a5d4aa69fdc14860c18cc962d79ea5c66ad84024c9cf"} Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.827577 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfaf030e7c045131a7e3a5d4aa69fdc14860c18cc962d79ea5c66ad84024c9cf" Nov 25 12:35:10 crc kubenswrapper[4688]: I1125 12:35:10.827589 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s8lfl" Nov 25 12:35:10 crc kubenswrapper[4688]: E1125 12:35:10.950729 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod563d1d64_50e6_4460_8365_5ead3bc46347.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod563d1d64_50e6_4460_8365_5ead3bc46347.slice/crio-cfaf030e7c045131a7e3a5d4aa69fdc14860c18cc962d79ea5c66ad84024c9cf\": RecentStats: unable to find data in memory cache]" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.005327 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.005643 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerName="nova-api-log" containerID="cri-o://deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c" gracePeriod=30 Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.005704 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerName="nova-api-api" containerID="cri-o://0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2" gracePeriod=30 Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.017698 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.017948 4688 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="c53b6538-362c-4e79-ab5f-578cfe6b1ab9" containerName="nova-scheduler-scheduler" containerID="cri-o://2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986" gracePeriod=30 Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.030937 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.031177 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-log" containerID="cri-o://d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57" gracePeriod=30 Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.031235 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-metadata" containerID="cri-o://8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd" gracePeriod=30 Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.571033 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.688033 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvthg\" (UniqueName: \"kubernetes.io/projected/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-kube-api-access-wvthg\") pod \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.688235 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-combined-ca-bundle\") pod \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.688259 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-config-data\") pod \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.688314 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-logs\") pod \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.688356 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-internal-tls-certs\") pod \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.688395 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-public-tls-certs\") pod \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\" (UID: \"2b4c776a-7dbb-479f-a8e7-ad6575aacea2\") " Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.689196 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-logs" (OuterVolumeSpecName: "logs") pod "2b4c776a-7dbb-479f-a8e7-ad6575aacea2" (UID: "2b4c776a-7dbb-479f-a8e7-ad6575aacea2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.694327 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-kube-api-access-wvthg" (OuterVolumeSpecName: "kube-api-access-wvthg") pod "2b4c776a-7dbb-479f-a8e7-ad6575aacea2" (UID: "2b4c776a-7dbb-479f-a8e7-ad6575aacea2"). InnerVolumeSpecName "kube-api-access-wvthg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.716792 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b4c776a-7dbb-479f-a8e7-ad6575aacea2" (UID: "2b4c776a-7dbb-479f-a8e7-ad6575aacea2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.719183 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-config-data" (OuterVolumeSpecName: "config-data") pod "2b4c776a-7dbb-479f-a8e7-ad6575aacea2" (UID: "2b4c776a-7dbb-479f-a8e7-ad6575aacea2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.736755 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2b4c776a-7dbb-479f-a8e7-ad6575aacea2" (UID: "2b4c776a-7dbb-479f-a8e7-ad6575aacea2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.763319 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2b4c776a-7dbb-479f-a8e7-ad6575aacea2" (UID: "2b4c776a-7dbb-479f-a8e7-ad6575aacea2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.790872 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.790910 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.790921 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.790929 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.790937 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.790945 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvthg\" (UniqueName: \"kubernetes.io/projected/2b4c776a-7dbb-479f-a8e7-ad6575aacea2-kube-api-access-wvthg\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.840581 4688 generic.go:334] "Generic (PLEG): container finished" podID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerID="d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57" exitCode=143 Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.840656 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca","Type":"ContainerDied","Data":"d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57"} Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.845483 4688 generic.go:334] "Generic (PLEG): container finished" podID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerID="0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2" exitCode=0 Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.845598 4688 generic.go:334] "Generic (PLEG): container finished" podID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerID="deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c" exitCode=143 Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.845625 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b4c776a-7dbb-479f-a8e7-ad6575aacea2","Type":"ContainerDied","Data":"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2"} Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.845673 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b4c776a-7dbb-479f-a8e7-ad6575aacea2","Type":"ContainerDied","Data":"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c"} Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.845689 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2b4c776a-7dbb-479f-a8e7-ad6575aacea2","Type":"ContainerDied","Data":"520d738d4a23729b3be927f6e43a1a6689d2bdb447112d2b2b9955f144f6893e"} Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.845709 4688 scope.go:117] "RemoveContainer" containerID="0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.845943 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.886966 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.889729 4688 scope.go:117] "RemoveContainer" containerID="deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.908664 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.912637 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.914171 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.919488 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.920019 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerName="nova-api-api" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920043 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerName="nova-api-api" Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.920069 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerName="nova-api-log" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920078 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerName="nova-api-log" Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.920094 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerName="init" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920102 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerName="init" Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.920123 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563d1d64-50e6-4460-8365-5ead3bc46347" containerName="nova-manage" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920130 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="563d1d64-50e6-4460-8365-5ead3bc46347" containerName="nova-manage" Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.920149 4688 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerName="dnsmasq-dns" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920157 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerName="dnsmasq-dns" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920372 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="563d1d64-50e6-4460-8365-5ead3bc46347" containerName="nova-manage" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920395 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerName="nova-api-log" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920417 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" containerName="nova-api-api" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.920427 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f448e93d-acc9-49aa-9404-bd886cd6a4f1" containerName="dnsmasq-dns" Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.921841 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.921912 4688 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c53b6538-362c-4e79-ab5f-578cfe6b1ab9" containerName="nova-scheduler-scheduler" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.922122 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.928260 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.931694 4688 scope.go:117] "RemoveContainer" containerID="0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2" Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.935662 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2\": container with ID starting with 0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2 not found: ID does not exist" containerID="0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.935690 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2"} err="failed to get container status \"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2\": rpc error: code = NotFound desc = could not find container \"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2\": container with ID starting with 0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2 not found: ID does not exist" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.935714 4688 scope.go:117] "RemoveContainer" containerID="deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c" Nov 25 12:35:11 crc kubenswrapper[4688]: E1125 12:35:11.936145 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c\": container with ID starting with deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c not found: ID does not exist" containerID="deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.936166 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c"} err="failed to get container status \"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c\": rpc error: code = NotFound desc = could not find container \"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c\": container with ID starting with deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c not found: ID does not exist" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.936180 4688 scope.go:117] "RemoveContainer" containerID="0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.936428 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2"} err="failed to get container status \"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2\": rpc error: code = NotFound desc = could not find container \"0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2\": container with ID starting with 0e696be244b4efaddaadf31bf76024fa48163b0610f7d650d23952bc8dfca5b2 not found: ID does not exist" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.936443 4688 scope.go:117] "RemoveContainer" 
containerID="deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.966532 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c"} err="failed to get container status \"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c\": rpc error: code = NotFound desc = could not find container \"deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c\": container with ID starting with deb8c3af6449f18aca6987b1be4624c3d90175040b5cf3f4e152f7d1037c3d4c not found: ID does not exist" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.968878 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.969119 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 12:35:11 crc kubenswrapper[4688]: I1125 12:35:11.969235 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.100287 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-public-tls-certs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.100374 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-config-data\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.100453 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.100470 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.100566 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a46e21bc-c734-4c5d-a16a-27860cb65ab0-logs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.100873 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxk4\" (UniqueName: \"kubernetes.io/projected/a46e21bc-c734-4c5d-a16a-27860cb65ab0-kube-api-access-fzxk4\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.202467 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.202509 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.202578 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a46e21bc-c734-4c5d-a16a-27860cb65ab0-logs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.202636 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxk4\" (UniqueName: \"kubernetes.io/projected/a46e21bc-c734-4c5d-a16a-27860cb65ab0-kube-api-access-fzxk4\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.202690 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-public-tls-certs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.202716 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-config-data\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.203448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a46e21bc-c734-4c5d-a16a-27860cb65ab0-logs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.207818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-config-data\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.208317 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-public-tls-certs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.208487 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.209339 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a46e21bc-c734-4c5d-a16a-27860cb65ab0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.223901 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxk4\" (UniqueName: \"kubernetes.io/projected/a46e21bc-c734-4c5d-a16a-27860cb65ab0-kube-api-access-fzxk4\") pod \"nova-api-0\" (UID: \"a46e21bc-c734-4c5d-a16a-27860cb65ab0\") " pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.293125 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 12:35:12 crc kubenswrapper[4688]: W1125 12:35:12.745769 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda46e21bc_c734_4c5d_a16a_27860cb65ab0.slice/crio-dd765b9765f74fc7aed8bd6309744e548e0da59221df4de60d1985946fd92022 WatchSource:0}: Error finding container dd765b9765f74fc7aed8bd6309744e548e0da59221df4de60d1985946fd92022: Status 404 returned error can't find the container with id dd765b9765f74fc7aed8bd6309744e548e0da59221df4de60d1985946fd92022 Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.750509 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b4c776a-7dbb-479f-a8e7-ad6575aacea2" path="/var/lib/kubelet/pods/2b4c776a-7dbb-479f-a8e7-ad6575aacea2/volumes" Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.751294 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 12:35:12 crc kubenswrapper[4688]: I1125 12:35:12.859008 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a46e21bc-c734-4c5d-a16a-27860cb65ab0","Type":"ContainerStarted","Data":"dd765b9765f74fc7aed8bd6309744e548e0da59221df4de60d1985946fd92022"} Nov 25 12:35:13 crc kubenswrapper[4688]: I1125 12:35:13.872145 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a46e21bc-c734-4c5d-a16a-27860cb65ab0","Type":"ContainerStarted","Data":"778635efecff0bd2593c63dd93de0c4a008c1fa2fd9f0f88766fb9b44a9b03a4"} Nov 25 12:35:13 crc kubenswrapper[4688]: I1125 12:35:13.873106 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a46e21bc-c734-4c5d-a16a-27860cb65ab0","Type":"ContainerStarted","Data":"a8b2e1c24016097cc37694ef99dd07504da194e050d65f12e5e0d9cc9939ab65"} Nov 25 12:35:13 crc kubenswrapper[4688]: I1125 12:35:13.897621 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.897590074 podStartE2EDuration="2.897590074s" podCreationTimestamp="2025-11-25 12:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:35:13.894374968 +0000 UTC m=+1264.004003836" watchObservedRunningTime="2025-11-25 12:35:13.897590074 +0000 UTC m=+1264.007219012" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.801422 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.884340 4688 generic.go:334] "Generic (PLEG): container finished" podID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerID="8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd" exitCode=0 Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.885511 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca","Type":"ContainerDied","Data":"8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd"} Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.885547 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.885581 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca","Type":"ContainerDied","Data":"32cd859b1ca07202e4abd33baba1c0fcb9a15460745d2c0b92a0600738d71219"} Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.885601 4688 scope.go:117] "RemoveContainer" containerID="8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.913884 4688 scope.go:117] "RemoveContainer" containerID="d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.945222 4688 scope.go:117] "RemoveContainer" containerID="8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd" Nov 25 12:35:14 crc kubenswrapper[4688]: E1125 12:35:14.948147 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd\": container with ID starting with 8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd not found: ID does not exist" containerID="8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.948220 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd"} err="failed to get container status \"8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd\": rpc error: code = NotFound desc = could not find container \"8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd\": container with ID starting with 8ef30f33a9f822635b68ec3fb449c4f41addcc021381b91ba7e96e468fb72ebd not found: ID does not exist" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.948241 4688 scope.go:117] "RemoveContainer" containerID="d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57" Nov 25 12:35:14 crc kubenswrapper[4688]: E1125 12:35:14.948647 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57\": container with ID starting with d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57 not found: ID does not exist" containerID="d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.948666 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57"} err="failed to get container status \"d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57\": rpc error: code = NotFound desc = could not find container \"d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57\": container with ID starting with d32060ed4d2af16ec2d01301db63ff94611cdbbed901626df445b86a8b820d57 not found: ID does not exist" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.954308 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t6tn\" (UniqueName: \"kubernetes.io/projected/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-kube-api-access-7t6tn\") pod \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.954350 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-config-data\") pod \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.954434 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-combined-ca-bundle\") pod \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.954485 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-nova-metadata-tls-certs\") pod \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.954505 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-logs\") pod \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\" (UID: \"e0d796e9-1552-4c93-94d0-cd0ac2cf8aca\") " Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.955635 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-logs" (OuterVolumeSpecName: "logs") pod "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" (UID: "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:14 crc kubenswrapper[4688]: I1125 12:35:14.964018 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-kube-api-access-7t6tn" (OuterVolumeSpecName: "kube-api-access-7t6tn") pod "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" (UID: "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca"). InnerVolumeSpecName "kube-api-access-7t6tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.009448 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-config-data" (OuterVolumeSpecName: "config-data") pod "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" (UID: "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.011676 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" (UID: "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.040728 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" (UID: "e0d796e9-1552-4c93-94d0-cd0ac2cf8aca"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.057135 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.057172 4688 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.057186 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-logs\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.057197 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t6tn\" (UniqueName: \"kubernetes.io/projected/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-kube-api-access-7t6tn\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.057209 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.239670 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.249969 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.276644 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:35:15 crc kubenswrapper[4688]: E1125 12:35:15.277120 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-log" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.277139 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-log" Nov 25 12:35:15 crc kubenswrapper[4688]: E1125 12:35:15.277150 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-metadata" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.277157 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" 
containerName="nova-metadata-metadata" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.277354 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-metadata" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.277383 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-log" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.278658 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.282228 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.282379 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.289713 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.366590 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.366967 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.367104 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac5d7790-506b-40b0-9721-6cff85ff053e-logs\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.367204 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gcmv\" (UniqueName: \"kubernetes.io/projected/ac5d7790-506b-40b0-9721-6cff85ff053e-kube-api-access-5gcmv\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.367363 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-config-data\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.469250 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.469648 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.469688 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac5d7790-506b-40b0-9721-6cff85ff053e-logs\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.469717 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gcmv\" (UniqueName: \"kubernetes.io/projected/ac5d7790-506b-40b0-9721-6cff85ff053e-kube-api-access-5gcmv\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.469790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-config-data\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.470250 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac5d7790-506b-40b0-9721-6cff85ff053e-logs\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.472992 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.473138 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.473707 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5d7790-506b-40b0-9721-6cff85ff053e-config-data\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.487049 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gcmv\" (UniqueName: \"kubernetes.io/projected/ac5d7790-506b-40b0-9721-6cff85ff053e-kube-api-access-5gcmv\") pod \"nova-metadata-0\" (UID: \"ac5d7790-506b-40b0-9721-6cff85ff053e\") " pod="openstack/nova-metadata-0" Nov 25 12:35:15 crc kubenswrapper[4688]: I1125 12:35:15.615252 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.132425 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 12:35:16 crc kubenswrapper[4688]: W1125 12:35:16.137167 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac5d7790_506b_40b0_9721_6cff85ff053e.slice/crio-29cefff90aa02661372a4924c19983fad45ff7ec24ad2a86114ae4d17552cbe2 WatchSource:0}: Error finding container 29cefff90aa02661372a4924c19983fad45ff7ec24ad2a86114ae4d17552cbe2: Status 404 returned error can't find the container with id 29cefff90aa02661372a4924c19983fad45ff7ec24ad2a86114ae4d17552cbe2 Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.465836 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.593435 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-config-data\") pod \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.593516 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-combined-ca-bundle\") pod \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.593659 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlfpc\" (UniqueName: \"kubernetes.io/projected/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-kube-api-access-zlfpc\") pod \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\" (UID: \"c53b6538-362c-4e79-ab5f-578cfe6b1ab9\") " Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.603781 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-kube-api-access-zlfpc" (OuterVolumeSpecName: "kube-api-access-zlfpc") pod "c53b6538-362c-4e79-ab5f-578cfe6b1ab9" (UID: "c53b6538-362c-4e79-ab5f-578cfe6b1ab9"). InnerVolumeSpecName "kube-api-access-zlfpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.624682 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-config-data" (OuterVolumeSpecName: "config-data") pod "c53b6538-362c-4e79-ab5f-578cfe6b1ab9" (UID: "c53b6538-362c-4e79-ab5f-578cfe6b1ab9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.629743 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c53b6538-362c-4e79-ab5f-578cfe6b1ab9" (UID: "c53b6538-362c-4e79-ab5f-578cfe6b1ab9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.695775 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.695815 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.695826 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlfpc\" (UniqueName: \"kubernetes.io/projected/c53b6538-362c-4e79-ab5f-578cfe6b1ab9-kube-api-access-zlfpc\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.751208 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" path="/var/lib/kubelet/pods/e0d796e9-1552-4c93-94d0-cd0ac2cf8aca/volumes" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.908799 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac5d7790-506b-40b0-9721-6cff85ff053e","Type":"ContainerStarted","Data":"9daa12f6d5d86e48eb7aae3c1fae7bea6d40b386fcd0518902a34ec0a563c28d"} Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.909138 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac5d7790-506b-40b0-9721-6cff85ff053e","Type":"ContainerStarted","Data":"6aa5dbdcb158e4f952225fbecf9f37a644f4231fc372a6526562202f7ded8191"} Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.909240 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac5d7790-506b-40b0-9721-6cff85ff053e","Type":"ContainerStarted","Data":"29cefff90aa02661372a4924c19983fad45ff7ec24ad2a86114ae4d17552cbe2"} Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.911233 4688 generic.go:334] "Generic (PLEG): container finished" podID="c53b6538-362c-4e79-ab5f-578cfe6b1ab9" containerID="2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986" exitCode=0 Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.911353 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c53b6538-362c-4e79-ab5f-578cfe6b1ab9","Type":"ContainerDied","Data":"2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986"} Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.911452 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c53b6538-362c-4e79-ab5f-578cfe6b1ab9","Type":"ContainerDied","Data":"55771b4d6cf2ebd04a186cf154cc2200451310a9ada9c5b9b10d07ddf732c028"} Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.911542 4688 scope.go:117] "RemoveContainer" containerID="2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.911703 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.936619 4688 scope.go:117] "RemoveContainer" containerID="2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986" Nov 25 12:35:16 crc kubenswrapper[4688]: E1125 12:35:16.937297 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986\": container with ID starting with 2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986 not found: ID does not exist" containerID="2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.937331 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986"} err="failed to get container status \"2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986\": rpc error: code = NotFound desc = could not find container \"2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986\": container with ID starting with 2e5dc715f9549e781b72ded395a224fdba4506303d0550fe4d069fee086e5986 not found: ID does not exist" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.952086 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.952063726 podStartE2EDuration="1.952063726s" podCreationTimestamp="2025-11-25 12:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:35:16.939987463 +0000 UTC m=+1267.049616351" watchObservedRunningTime="2025-11-25 12:35:16.952063726 +0000 UTC m=+1267.061692594" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.959737 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.970222 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.981485 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:35:16 crc kubenswrapper[4688]: E1125 12:35:16.981986 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53b6538-362c-4e79-ab5f-578cfe6b1ab9" containerName="nova-scheduler-scheduler" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.982010 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53b6538-362c-4e79-ab5f-578cfe6b1ab9" containerName="nova-scheduler-scheduler" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.982218 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53b6538-362c-4e79-ab5f-578cfe6b1ab9" containerName="nova-scheduler-scheduler" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.982957 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.985606 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 12:35:16 crc kubenswrapper[4688]: I1125 12:35:16.995452 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.105976 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca13d90-02a1-44cc-88ed-dabdce12144a-config-data\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.106359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8r8\" (UniqueName: \"kubernetes.io/projected/eca13d90-02a1-44cc-88ed-dabdce12144a-kube-api-access-4t8r8\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.106632 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca13d90-02a1-44cc-88ed-dabdce12144a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.208855 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca13d90-02a1-44cc-88ed-dabdce12144a-config-data\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.209235 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t8r8\" (UniqueName: \"kubernetes.io/projected/eca13d90-02a1-44cc-88ed-dabdce12144a-kube-api-access-4t8r8\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.209309 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca13d90-02a1-44cc-88ed-dabdce12144a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.214711 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca13d90-02a1-44cc-88ed-dabdce12144a-config-data\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.214778 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca13d90-02a1-44cc-88ed-dabdce12144a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.228385 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t8r8\" (UniqueName: 
\"kubernetes.io/projected/eca13d90-02a1-44cc-88ed-dabdce12144a-kube-api-access-4t8r8\") pod \"nova-scheduler-0\" (UID: \"eca13d90-02a1-44cc-88ed-dabdce12144a\") " pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.314436 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.752266 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 12:35:17 crc kubenswrapper[4688]: W1125 12:35:17.755220 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca13d90_02a1_44cc_88ed_dabdce12144a.slice/crio-c8091ac3f6c3407c05a0c85c6abc247bd04c44193bd8ba928c1ac72d6a436957 WatchSource:0}: Error finding container c8091ac3f6c3407c05a0c85c6abc247bd04c44193bd8ba928c1ac72d6a436957: Status 404 returned error can't find the container with id c8091ac3f6c3407c05a0c85c6abc247bd04c44193bd8ba928c1ac72d6a436957 Nov 25 12:35:17 crc kubenswrapper[4688]: I1125 12:35:17.923994 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eca13d90-02a1-44cc-88ed-dabdce12144a","Type":"ContainerStarted","Data":"c8091ac3f6c3407c05a0c85c6abc247bd04c44193bd8ba928c1ac72d6a436957"} Nov 25 12:35:18 crc kubenswrapper[4688]: I1125 12:35:18.750854 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53b6538-362c-4e79-ab5f-578cfe6b1ab9" path="/var/lib/kubelet/pods/c53b6538-362c-4e79-ab5f-578cfe6b1ab9/volumes" Nov 25 12:35:18 crc kubenswrapper[4688]: I1125 12:35:18.935160 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eca13d90-02a1-44cc-88ed-dabdce12144a","Type":"ContainerStarted","Data":"294a8449c1701bb6a6d17c4b10ef2207eb00d75e6b72f357775dbe449292491c"} Nov 25 12:35:18 crc kubenswrapper[4688]: I1125 12:35:18.962140 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.962099438 podStartE2EDuration="2.962099438s" podCreationTimestamp="2025-11-25 12:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:35:18.95130224 +0000 UTC m=+1269.060931128" watchObservedRunningTime="2025-11-25 12:35:18.962099438 +0000 UTC m=+1269.071728326" Nov 25 12:35:19 crc kubenswrapper[4688]: I1125 12:35:19.641955 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 12:35:19 crc kubenswrapper[4688]: I1125 12:35:19.642093 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e0d796e9-1552-4c93-94d0-cd0ac2cf8aca" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 12:35:20 crc kubenswrapper[4688]: I1125 12:35:20.615991 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 12:35:20 crc kubenswrapper[4688]: I1125 12:35:20.616083 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 12:35:22 crc kubenswrapper[4688]: I1125 12:35:22.294227 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 12:35:22 crc kubenswrapper[4688]: I1125 12:35:22.294778 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 12:35:22 crc kubenswrapper[4688]: I1125 12:35:22.314859 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 12:35:23 crc kubenswrapper[4688]: I1125 12:35:23.309765 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a46e21bc-c734-4c5d-a16a-27860cb65ab0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 12:35:23 crc kubenswrapper[4688]: I1125 12:35:23.309848 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a46e21bc-c734-4c5d-a16a-27860cb65ab0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 12:35:25 crc kubenswrapper[4688]: I1125 12:35:25.616204 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 12:35:25 crc kubenswrapper[4688]: I1125 12:35:25.616696 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 12:35:26 crc kubenswrapper[4688]: I1125 12:35:26.633724 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ac5d7790-506b-40b0-9721-6cff85ff053e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 12:35:26 crc kubenswrapper[4688]: I1125 12:35:26.633732 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ac5d7790-506b-40b0-9721-6cff85ff053e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 12:35:27 crc kubenswrapper[4688]: I1125 12:35:27.315688 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 12:35:27 crc kubenswrapper[4688]: I1125 12:35:27.344670 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 12:35:28 crc kubenswrapper[4688]: I1125 12:35:28.046203 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 12:35:29 crc kubenswrapper[4688]: I1125 12:35:29.051631 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 12:35:32 crc kubenswrapper[4688]: I1125 12:35:32.300005 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 12:35:32 crc kubenswrapper[4688]: I1125 12:35:32.300342 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 12:35:32 crc kubenswrapper[4688]: I1125 12:35:32.300691 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 
12:35:32 crc kubenswrapper[4688]: I1125 12:35:32.300740 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 12:35:32 crc kubenswrapper[4688]: I1125 12:35:32.306895 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 12:35:32 crc kubenswrapper[4688]: I1125 12:35:32.306958 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 12:35:35 crc kubenswrapper[4688]: I1125 12:35:35.622848 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 12:35:35 crc kubenswrapper[4688]: I1125 12:35:35.623223 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 12:35:35 crc kubenswrapper[4688]: I1125 12:35:35.628200 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 12:35:35 crc kubenswrapper[4688]: I1125 12:35:35.629355 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 12:35:44 crc kubenswrapper[4688]: I1125 12:35:44.465977 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:35:45 crc kubenswrapper[4688]: I1125 12:35:45.296412 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:35:48 crc kubenswrapper[4688]: I1125 12:35:48.440558 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerName="rabbitmq" containerID="cri-o://2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2" gracePeriod=604797 Nov 25 12:35:49 crc kubenswrapper[4688]: I1125 12:35:49.222202 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 12:35:49 crc kubenswrapper[4688]: I1125 12:35:49.312219 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dbef45ff-afce-462a-8835-30339db0f5a0" containerName="rabbitmq" containerID="cri-o://d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d" gracePeriod=604796 Nov 25 12:35:49 crc kubenswrapper[4688]: I1125 12:35:49.501196 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="dbef45ff-afce-462a-8835-30339db0f5a0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.029864 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.168724 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-config-data\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.169042 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.169200 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plc57\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-kube-api-access-plc57\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.169309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-pod-info\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.169465 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-erlang-cookie\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.169565 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-server-conf\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.169687 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-confd\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.169855 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-plugins\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.170003 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-tls\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.169873 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: 
"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.170080 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.170284 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-erlang-cookie-secret\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.170367 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-plugins-conf\") pod \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\" (UID: \"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e\") " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.170931 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.171029 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.171437 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.174930 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-pod-info" (OuterVolumeSpecName: "pod-info") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.175409 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.175672 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.184581 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.184782 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-kube-api-access-plc57" (OuterVolumeSpecName: "kube-api-access-plc57") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "kube-api-access-plc57". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.197168 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-config-data" (OuterVolumeSpecName: "config-data") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.239222 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-server-conf" (OuterVolumeSpecName: "server-conf") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.275031 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.275067 4688 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.275081 4688 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.275090 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.275118 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.275130 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plc57\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-kube-api-access-plc57\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.275142 4688 reconciler_common.go:293] "Volume detached for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.275150 4688 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.282833 4688 generic.go:334] "Generic (PLEG): container finished" podID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerID="2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2" exitCode=0 Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.282884 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e","Type":"ContainerDied","Data":"2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2"} Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.282916 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e","Type":"ContainerDied","Data":"1c4e69df94b838daa19aa7464cc5c53a4f5f98999787ff7d701b3c965e5b6e10"} Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.282936 4688 scope.go:117] "RemoveContainer" containerID="2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.283060 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.300808 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" (UID: "dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.311956 4688 scope.go:117] "RemoveContainer" containerID="291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.329222 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.342390 4688 scope.go:117] "RemoveContainer" containerID="2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2" Nov 25 12:35:55 crc kubenswrapper[4688]: E1125 12:35:55.342982 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2\": container with ID starting with 2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2 not found: ID does not exist" containerID="2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.343040 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2"} err="failed to get container status \"2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2\": rpc error: code = NotFound desc = could not find container \"2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2\": container with ID starting with 2f9f3032ddd8dc8ec0107fd6fa935ee6f9e5055048d55a1b03b0e2eb2079bfe2 not found: ID does not exist" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.343085 4688 scope.go:117] "RemoveContainer" containerID="291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45" Nov 25 12:35:55 crc kubenswrapper[4688]: E1125 12:35:55.343475 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45\": container with ID starting with 291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45 not found: ID does not exist" containerID="291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.343501 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45"} err="failed to get container status \"291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45\": rpc error: code = NotFound desc = could not find container \"291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45\": container with ID starting with 291632f010db7cf6dfb19ecf8575f479ed58ef2546b6f80ecd2df144da346f45 not found: ID does not exist" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.376385 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.376782 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.625822 4688 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.643098 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.659990 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:35:55 crc kubenswrapper[4688]: E1125 12:35:55.660414 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerName="rabbitmq" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.660431 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerName="rabbitmq" Nov 25 12:35:55 crc kubenswrapper[4688]: E1125 12:35:55.660441 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerName="setup-container" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.660447 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerName="setup-container" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.660683 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" containerName="rabbitmq" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.661693 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.671573 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.675461 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.675681 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.675829 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.676001 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.679518 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.679830 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jhscs" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.680478 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795490 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795607 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795666 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795701 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlfrz\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-kube-api-access-tlfrz\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795741 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-config-data\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795762 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795788 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795849 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795877 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/24997c07-a110-43df-accd-9daeeff9a29c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795913 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/24997c07-a110-43df-accd-9daeeff9a29c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.795952 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc 
kubenswrapper[4688]: I1125 12:35:55.870902 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.897485 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-config-data\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.897575 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.897622 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.897735 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.897765 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/24997c07-a110-43df-accd-9daeeff9a29c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.897836 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/24997c07-a110-43df-accd-9daeeff9a29c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.897887 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.897951 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.898015 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.898078 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.898130 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlfrz\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-kube-api-access-tlfrz\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.898581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-config-data\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.899098 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.899259 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.899941 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.899978 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.900725 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/24997c07-a110-43df-accd-9daeeff9a29c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.911319 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/24997c07-a110-43df-accd-9daeeff9a29c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.911899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/24997c07-a110-43df-accd-9daeeff9a29c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" 
Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.923324 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlfrz\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-kube-api-access-tlfrz\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.926162 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.926737 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/24997c07-a110-43df-accd-9daeeff9a29c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.952569 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"24997c07-a110-43df-accd-9daeeff9a29c\") " pod="openstack/rabbitmq-server-0" Nov 25 12:35:55 crc kubenswrapper[4688]: I1125 12:35:55.999570 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbef45ff-afce-462a-8835-30339db0f5a0-pod-info\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:55.999874 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbef45ff-afce-462a-8835-30339db0f5a0-erlang-cookie-secret\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:55.999975 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-confd\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.000021 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.000073 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-plugins\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.000174 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-server-conf\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: 
I1125 12:35:56.000212 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnvjg\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-kube-api-access-rnvjg\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.000267 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-tls\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.000339 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-erlang-cookie\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.000377 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-config-data\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.000403 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-plugins-conf\") pod \"dbef45ff-afce-462a-8835-30339db0f5a0\" (UID: \"dbef45ff-afce-462a-8835-30339db0f5a0\") " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.001092 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.001721 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.006264 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.007774 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.007864 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dbef45ff-afce-462a-8835-30339db0f5a0-pod-info" (OuterVolumeSpecName: "pod-info") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.009239 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.010490 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.017784 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-kube-api-access-rnvjg" (OuterVolumeSpecName: "kube-api-access-rnvjg") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "kube-api-access-rnvjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.024460 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbef45ff-afce-462a-8835-30339db0f5a0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.059946 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-config-data" (OuterVolumeSpecName: "config-data") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.092976 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-server-conf" (OuterVolumeSpecName: "server-conf") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104337 4688 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104379 4688 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbef45ff-afce-462a-8835-30339db0f5a0-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104391 4688 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbef45ff-afce-462a-8835-30339db0f5a0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104422 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104438 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104449 4688 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104459 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnvjg\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-kube-api-access-rnvjg\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104472 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104482 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.104491 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbef45ff-afce-462a-8835-30339db0f5a0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.146205 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.159766 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dbef45ff-afce-462a-8835-30339db0f5a0" (UID: "dbef45ff-afce-462a-8835-30339db0f5a0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.206591 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbef45ff-afce-462a-8835-30339db0f5a0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.206616 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.294084 4688 generic.go:334] "Generic (PLEG): container finished" podID="dbef45ff-afce-462a-8835-30339db0f5a0" containerID="d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d" exitCode=0 Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.294138 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.294152 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbef45ff-afce-462a-8835-30339db0f5a0","Type":"ContainerDied","Data":"d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d"} Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.294182 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbef45ff-afce-462a-8835-30339db0f5a0","Type":"ContainerDied","Data":"eeb3de703bdb8c0a41cdeb567749070a7fb9f4bd304224a83d58347695a0b389"} Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.294219 4688 scope.go:117] "RemoveContainer" containerID="d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.317660 4688 scope.go:117] "RemoveContainer" containerID="f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.342919 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.344928 4688 scope.go:117] "RemoveContainer" containerID="d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d" Nov 25 12:35:56 crc kubenswrapper[4688]: E1125 12:35:56.345355 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d\": container with ID starting with d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d not found: ID does not exist" containerID="d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.345376 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d"} err="failed to get container status \"d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d\": rpc error: code = NotFound desc = could not find container \"d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d\": container with ID starting with d6bddb83dcc5a77cb7ec56569c7d39905dc9effa8c40a4b72426bc26559b7e4d not found: ID does not exist" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.345422 4688 scope.go:117] "RemoveContainer" 
containerID="f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89" Nov 25 12:35:56 crc kubenswrapper[4688]: E1125 12:35:56.345859 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89\": container with ID starting with f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89 not found: ID does not exist" containerID="f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.345884 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89"} err="failed to get container status \"f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89\": rpc error: code = NotFound desc = could not find container \"f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89\": container with ID starting with f25e65cfc176890ec414ca8ec1206753341d9dc18066dd59aaf990016390fe89 not found: ID does not exist" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.357545 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.385928 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:35:56 crc kubenswrapper[4688]: E1125 12:35:56.386340 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbef45ff-afce-462a-8835-30339db0f5a0" containerName="rabbitmq" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.386352 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbef45ff-afce-462a-8835-30339db0f5a0" containerName="rabbitmq" Nov 25 12:35:56 crc kubenswrapper[4688]: E1125 12:35:56.386368 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbef45ff-afce-462a-8835-30339db0f5a0" containerName="setup-container" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.386373 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbef45ff-afce-462a-8835-30339db0f5a0" containerName="setup-container" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.386559 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbef45ff-afce-462a-8835-30339db0f5a0" containerName="rabbitmq" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.387537 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.390996 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.391160 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.391346 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jlqvn" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.391446 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.391568 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.391694 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.391793 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.395175 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.513489 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnhlk\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-kube-api-access-bnhlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.513587 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.513679 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.513764 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.513823 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31cb28aa-9d13-4a28-b87d-85abb3af9cef-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.513876 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31cb28aa-9d13-4a28-b87d-85abb3af9cef-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.514023 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.514092 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.514140 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.514195 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.514249 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.616821 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.616917 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.618253 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnhlk\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-kube-api-access-bnhlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.618308 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.618507 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.618652 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.618735 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.618956 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31cb28aa-9d13-4a28-b87d-85abb3af9cef-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.619003 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.619398 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31cb28aa-9d13-4a28-b87d-85abb3af9cef-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.619478 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.619577 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.619616 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.620094 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.620200 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.620538 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.622707 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31cb28aa-9d13-4a28-b87d-85abb3af9cef-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.622904 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.623948 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31cb28aa-9d13-4a28-b87d-85abb3af9cef-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.627696 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.628504 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31cb28aa-9d13-4a28-b87d-85abb3af9cef-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.649589 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnhlk\" (UniqueName: \"kubernetes.io/projected/31cb28aa-9d13-4a28-b87d-85abb3af9cef-kube-api-access-bnhlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.652822 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"31cb28aa-9d13-4a28-b87d-85abb3af9cef\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.714668 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.749327 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e" path="/var/lib/kubelet/pods/dbbbfdc1-1c5b-4d8a-bffb-b7b1c0ed0b4e/volumes" Nov 25 12:35:56 crc kubenswrapper[4688]: I1125 12:35:56.750230 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbef45ff-afce-462a-8835-30339db0f5a0" path="/var/lib/kubelet/pods/dbef45ff-afce-462a-8835-30339db0f5a0/volumes" Nov 25 12:35:57 crc kubenswrapper[4688]: I1125 12:35:57.090306 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 12:35:57 crc kubenswrapper[4688]: I1125 12:35:57.213147 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 12:35:57 crc kubenswrapper[4688]: W1125 12:35:57.219304 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31cb28aa_9d13_4a28_b87d_85abb3af9cef.slice/crio-c5b0ce3fee386bc9fffa4488aebd82c12558b452293c8ccd48a67f101272d166 WatchSource:0}: Error finding container c5b0ce3fee386bc9fffa4488aebd82c12558b452293c8ccd48a67f101272d166: Status 404 returned error can't find the container with id c5b0ce3fee386bc9fffa4488aebd82c12558b452293c8ccd48a67f101272d166 Nov 25 12:35:57 crc kubenswrapper[4688]: I1125 12:35:57.304758 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24997c07-a110-43df-accd-9daeeff9a29c","Type":"ContainerStarted","Data":"3046ef1063d3eb0e7ce60fe9c53e489d5882b864317cf072806fb6e52a9a2e94"} Nov 25 12:35:57 crc kubenswrapper[4688]: I1125 12:35:57.306101 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31cb28aa-9d13-4a28-b87d-85abb3af9cef","Type":"ContainerStarted","Data":"c5b0ce3fee386bc9fffa4488aebd82c12558b452293c8ccd48a67f101272d166"} Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.324808 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24997c07-a110-43df-accd-9daeeff9a29c","Type":"ContainerStarted","Data":"0c3fceb69d79adc3d38d4638b9c0f36b4c8409f807a4b59d74c45c9486b4fc69"} Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.326639 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31cb28aa-9d13-4a28-b87d-85abb3af9cef","Type":"ContainerStarted","Data":"868938f471c89726f0deb392b6e7336d042358a50b66d37e560205ca8f7ea3cc"} Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.352050 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-b7msp"] Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.353859 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: W1125 12:35:59.357993 4688 reflector.go:561] object-"openstack"/"openstack-edpm-ipam": failed to list *v1.ConfigMap: configmaps "openstack-edpm-ipam" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 25 12:35:59 crc kubenswrapper[4688]: E1125 12:35:59.358044 4688 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-edpm-ipam\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openstack-edpm-ipam\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.385219 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-b7msp"] Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.473548 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-config\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.473861 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.474041 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.474629 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlqdd\" (UniqueName: \"kubernetes.io/projected/93963b9e-9914-41bb-93fd-3188477f3853-kube-api-access-wlqdd\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.474980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.475050 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: 
I1125 12:35:59.475070 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.577061 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.577360 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.577430 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.577515 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-config\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.577648 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.577734 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.577821 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlqdd\" (UniqueName: \"kubernetes.io/projected/93963b9e-9914-41bb-93fd-3188477f3853-kube-api-access-wlqdd\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.578358 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-config\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.578365 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.578442 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.578619 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.578764 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:35:59 crc kubenswrapper[4688]: I1125 12:35:59.596852 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlqdd\" (UniqueName: \"kubernetes.io/projected/93963b9e-9914-41bb-93fd-3188477f3853-kube-api-access-wlqdd\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:36:00 crc kubenswrapper[4688]: E1125 12:36:00.578106 4688 configmap.go:193] Couldn't get configMap openstack/openstack-edpm-ipam: failed to sync configmap cache: timed out waiting for the condition Nov 25 12:36:00 crc kubenswrapper[4688]: E1125 12:36:00.578234 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam podName:93963b9e-9914-41bb-93fd-3188477f3853 nodeName:}" failed. No retries permitted until 2025-11-25 12:36:01.078206923 +0000 UTC m=+1311.187835791 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openstack-edpm-ipam" (UniqueName: "kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam") pod "dnsmasq-dns-7d84b4d45c-b7msp" (UID: "93963b9e-9914-41bb-93fd-3188477f3853") : failed to sync configmap cache: timed out waiting for the condition Nov 25 12:36:00 crc kubenswrapper[4688]: I1125 12:36:00.685241 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 12:36:01 crc kubenswrapper[4688]: I1125 12:36:01.103913 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:36:01 crc kubenswrapper[4688]: I1125 12:36:01.104762 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-b7msp\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:36:01 crc kubenswrapper[4688]: I1125 12:36:01.176706 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:36:01 crc kubenswrapper[4688]: I1125 12:36:01.610182 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-b7msp"] Nov 25 12:36:01 crc kubenswrapper[4688]: W1125 12:36:01.611747 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93963b9e_9914_41bb_93fd_3188477f3853.slice/crio-a1a604a4ab371af6222c6eaee8ac2af298c244e34359006742750098ad842ad9 WatchSource:0}: Error finding container a1a604a4ab371af6222c6eaee8ac2af298c244e34359006742750098ad842ad9: Status 404 returned error can't find the container with id a1a604a4ab371af6222c6eaee8ac2af298c244e34359006742750098ad842ad9 Nov 25 12:36:02 crc kubenswrapper[4688]: I1125 12:36:02.361993 4688 generic.go:334] "Generic (PLEG): container finished" podID="93963b9e-9914-41bb-93fd-3188477f3853" containerID="0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6" exitCode=0 Nov 25 12:36:02 crc kubenswrapper[4688]: I1125 12:36:02.362142 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" event={"ID":"93963b9e-9914-41bb-93fd-3188477f3853","Type":"ContainerDied","Data":"0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6"} Nov 25 12:36:02 crc kubenswrapper[4688]: I1125 12:36:02.362267 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" event={"ID":"93963b9e-9914-41bb-93fd-3188477f3853","Type":"ContainerStarted","Data":"a1a604a4ab371af6222c6eaee8ac2af298c244e34359006742750098ad842ad9"} Nov 25 12:36:03 crc kubenswrapper[4688]: I1125 12:36:03.372699 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" event={"ID":"93963b9e-9914-41bb-93fd-3188477f3853","Type":"ContainerStarted","Data":"32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb"} Nov 25 12:36:03 crc kubenswrapper[4688]: I1125 12:36:03.373227 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:36:03 crc 
kubenswrapper[4688]: I1125 12:36:03.399239 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" podStartSLOduration=4.399221125 podStartE2EDuration="4.399221125s" podCreationTimestamp="2025-11-25 12:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:36:03.389014334 +0000 UTC m=+1313.498643202" watchObservedRunningTime="2025-11-25 12:36:03.399221125 +0000 UTC m=+1313.508849993" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.178893 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.264782 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6"] Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.265084 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" podUID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" containerName="dnsmasq-dns" containerID="cri-o://692fd6b6f590e930938cc16fa00a4d7b50a8be0f4cda34b2d1625f6cef947382" gracePeriod=10 Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.385439 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-ctv5w"] Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.387303 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.417375 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-ctv5w"] Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.460917 4688 generic.go:334] "Generic (PLEG): container finished" podID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" containerID="692fd6b6f590e930938cc16fa00a4d7b50a8be0f4cda34b2d1625f6cef947382" exitCode=0 Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.460955 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" event={"ID":"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04","Type":"ContainerDied","Data":"692fd6b6f590e930938cc16fa00a4d7b50a8be0f4cda34b2d1625f6cef947382"} Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.509291 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.509415 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-config\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.509494 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 
crc kubenswrapper[4688]: I1125 12:36:11.509517 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.509568 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.509601 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrkv2\" (UniqueName: \"kubernetes.io/projected/26555a3c-6063-42b0-a1ce-18bebfe41afb-kube-api-access-xrkv2\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.509629 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.610890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.610965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrkv2\" (UniqueName: \"kubernetes.io/projected/26555a3c-6063-42b0-a1ce-18bebfe41afb-kube-api-access-xrkv2\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.611012 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.611065 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.611135 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-config\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.611216 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.611248 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.612389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-config\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.612406 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.612389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.612512 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.613481 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.613611 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26555a3c-6063-42b0-a1ce-18bebfe41afb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.631599 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrkv2\" (UniqueName: \"kubernetes.io/projected/26555a3c-6063-42b0-a1ce-18bebfe41afb-kube-api-access-xrkv2\") pod \"dnsmasq-dns-6f6df4f56c-ctv5w\" (UID: \"26555a3c-6063-42b0-a1ce-18bebfe41afb\") " pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.756364 
4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.861631 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.917210 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-svc\") pod \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.917273 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-config\") pod \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.917495 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-swift-storage-0\") pod \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.918261 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55r4l\" (UniqueName: \"kubernetes.io/projected/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-kube-api-access-55r4l\") pod \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.918362 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-sb\") pod \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.918396 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-nb\") pod \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.925002 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-kube-api-access-55r4l" (OuterVolumeSpecName: "kube-api-access-55r4l") pod "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" (UID: "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04"). InnerVolumeSpecName "kube-api-access-55r4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.985120 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" (UID: "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:11 crc kubenswrapper[4688]: I1125 12:36:11.986408 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" (UID: "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.021027 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-config" (OuterVolumeSpecName: "config") pod "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" (UID: "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.021432 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-config\") pod \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\" (UID: \"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04\") " Nov 25 12:36:12 crc kubenswrapper[4688]: W1125 12:36:12.022054 4688 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04/volumes/kubernetes.io~configmap/config Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.022281 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-config" (OuterVolumeSpecName: "config") pod "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" (UID: "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.022290 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.022336 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" (UID: "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.022346 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.022374 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.022443 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55r4l\" (UniqueName: \"kubernetes.io/projected/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-kube-api-access-55r4l\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.025628 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" (UID: "d31d0dcf-fc63-4bf0-b5c5-37c63673aa04"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.124593 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.124633 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:12 crc kubenswrapper[4688]: W1125 12:36:12.251640 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26555a3c_6063_42b0_a1ce_18bebfe41afb.slice/crio-3a57733f15538c0f1fcbc72e9b4115a80bc643f27e5f35a78e854623b11bc366 WatchSource:0}: Error finding container 3a57733f15538c0f1fcbc72e9b4115a80bc643f27e5f35a78e854623b11bc366: Status 404 returned error can't find the container with id 3a57733f15538c0f1fcbc72e9b4115a80bc643f27e5f35a78e854623b11bc366 Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.256743 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-ctv5w"] Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.475638 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" event={"ID":"d31d0dcf-fc63-4bf0-b5c5-37c63673aa04","Type":"ContainerDied","Data":"ec07be15a87118d7f42751c6a2dcbcef1467f92fe359c0d1f48e381d9d3e2ecb"} Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.476030 4688 scope.go:117] "RemoveContainer" containerID="692fd6b6f590e930938cc16fa00a4d7b50a8be0f4cda34b2d1625f6cef947382" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.475879 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.478918 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" event={"ID":"26555a3c-6063-42b0-a1ce-18bebfe41afb","Type":"ContainerStarted","Data":"3a57733f15538c0f1fcbc72e9b4115a80bc643f27e5f35a78e854623b11bc366"} Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.520014 4688 scope.go:117] "RemoveContainer" containerID="5ec360c1700eed3a816c1a9c8377f3b7882b747b39fe7a63f61ca462df2ac216" Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.537106 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6"] Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.549132 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-xfkn6"] Nov 25 12:36:12 crc kubenswrapper[4688]: I1125 12:36:12.774364 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" path="/var/lib/kubelet/pods/d31d0dcf-fc63-4bf0-b5c5-37c63673aa04/volumes" Nov 25 12:36:13 crc kubenswrapper[4688]: I1125 12:36:13.488653 4688 generic.go:334] "Generic (PLEG): container finished" podID="26555a3c-6063-42b0-a1ce-18bebfe41afb" containerID="afab8d390cb970bf65016e5b5fab764024cb227b2ccbb78402e014a1f29c9e0b" exitCode=0 Nov 25 12:36:13 crc kubenswrapper[4688]: I1125 12:36:13.488961 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" event={"ID":"26555a3c-6063-42b0-a1ce-18bebfe41afb","Type":"ContainerDied","Data":"afab8d390cb970bf65016e5b5fab764024cb227b2ccbb78402e014a1f29c9e0b"} Nov 25 12:36:14 crc kubenswrapper[4688]: I1125 12:36:14.503292 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" event={"ID":"26555a3c-6063-42b0-a1ce-18bebfe41afb","Type":"ContainerStarted","Data":"ba271f625c5e9bcac6b79c154bd8ccd1fd2a9178817aac607e9f06fdbddd3d2c"} Nov 25 12:36:14 crc kubenswrapper[4688]: I1125 12:36:14.503663 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:14 crc kubenswrapper[4688]: I1125 12:36:14.526834 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" podStartSLOduration=3.526805699 podStartE2EDuration="3.526805699s" podCreationTimestamp="2025-11-25 12:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:36:14.525151905 +0000 UTC m=+1324.634780773" watchObservedRunningTime="2025-11-25 12:36:14.526805699 +0000 UTC m=+1324.636434577" Nov 25 12:36:21 crc kubenswrapper[4688]: I1125 12:36:21.757676 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-ctv5w" Nov 25 12:36:21 crc kubenswrapper[4688]: I1125 12:36:21.829668 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-b7msp"] Nov 25 12:36:21 crc kubenswrapper[4688]: I1125 12:36:21.829958 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" podUID="93963b9e-9914-41bb-93fd-3188477f3853" containerName="dnsmasq-dns" containerID="cri-o://32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb" gracePeriod=10 Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.354267 4688 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.418866 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-swift-storage-0\") pod \"93963b9e-9914-41bb-93fd-3188477f3853\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.418933 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-sb\") pod \"93963b9e-9914-41bb-93fd-3188477f3853\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.419020 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam\") pod \"93963b9e-9914-41bb-93fd-3188477f3853\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.419061 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-nb\") pod \"93963b9e-9914-41bb-93fd-3188477f3853\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.419123 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-svc\") pod \"93963b9e-9914-41bb-93fd-3188477f3853\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.419190 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlqdd\" (UniqueName: \"kubernetes.io/projected/93963b9e-9914-41bb-93fd-3188477f3853-kube-api-access-wlqdd\") pod \"93963b9e-9914-41bb-93fd-3188477f3853\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.419261 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-config\") pod \"93963b9e-9914-41bb-93fd-3188477f3853\" (UID: \"93963b9e-9914-41bb-93fd-3188477f3853\") " Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.437092 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93963b9e-9914-41bb-93fd-3188477f3853-kube-api-access-wlqdd" (OuterVolumeSpecName: "kube-api-access-wlqdd") pod "93963b9e-9914-41bb-93fd-3188477f3853" (UID: "93963b9e-9914-41bb-93fd-3188477f3853"). InnerVolumeSpecName "kube-api-access-wlqdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.478983 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93963b9e-9914-41bb-93fd-3188477f3853" (UID: "93963b9e-9914-41bb-93fd-3188477f3853"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.483808 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "93963b9e-9914-41bb-93fd-3188477f3853" (UID: "93963b9e-9914-41bb-93fd-3188477f3853"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.488271 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93963b9e-9914-41bb-93fd-3188477f3853" (UID: "93963b9e-9914-41bb-93fd-3188477f3853"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.488920 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93963b9e-9914-41bb-93fd-3188477f3853" (UID: "93963b9e-9914-41bb-93fd-3188477f3853"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.490812 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "93963b9e-9914-41bb-93fd-3188477f3853" (UID: "93963b9e-9914-41bb-93fd-3188477f3853"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.510159 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-config" (OuterVolumeSpecName: "config") pod "93963b9e-9914-41bb-93fd-3188477f3853" (UID: "93963b9e-9914-41bb-93fd-3188477f3853"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.520852 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-config\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.520891 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.520907 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.520920 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.520929 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.520937 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93963b9e-9914-41bb-93fd-3188477f3853-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.520946 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlqdd\" (UniqueName: \"kubernetes.io/projected/93963b9e-9914-41bb-93fd-3188477f3853-kube-api-access-wlqdd\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.589041 4688 generic.go:334] "Generic (PLEG): container finished" podID="93963b9e-9914-41bb-93fd-3188477f3853" containerID="32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb" exitCode=0 Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.589087 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" event={"ID":"93963b9e-9914-41bb-93fd-3188477f3853","Type":"ContainerDied","Data":"32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb"} Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.589113 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" event={"ID":"93963b9e-9914-41bb-93fd-3188477f3853","Type":"ContainerDied","Data":"a1a604a4ab371af6222c6eaee8ac2af298c244e34359006742750098ad842ad9"} Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.589128 4688 scope.go:117] "RemoveContainer" containerID="32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.589244 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-b7msp" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.629477 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-b7msp"] Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.633178 4688 scope.go:117] "RemoveContainer" containerID="0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.638714 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-b7msp"] Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.659356 4688 scope.go:117] "RemoveContainer" containerID="32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb" Nov 25 12:36:22 crc kubenswrapper[4688]: E1125 12:36:22.663879 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb\": container with ID starting with 32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb not found: ID does not exist" containerID="32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.663923 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb"} err="failed to get container status \"32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb\": rpc error: code = NotFound desc = could not find container \"32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb\": container with ID starting with 32c54f0499abd6ea742a451cab00f4b72c1e38ecfd0a44b654789df739921dbb not found: ID does not exist" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.663946 4688 scope.go:117] "RemoveContainer" containerID="0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6" Nov 25 12:36:22 crc kubenswrapper[4688]: E1125 12:36:22.664804 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6\": container with ID starting with 0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6 not found: ID does not exist" containerID="0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.664862 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6"} err="failed to get container status \"0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6\": rpc error: code = NotFound desc = could not find container \"0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6\": container with ID starting with 0f51a8405cc263489ecdec33b5f362eadfe27087f58fe6bf3280655655560aa6 not found: ID does not exist" Nov 25 12:36:22 crc kubenswrapper[4688]: I1125 12:36:22.756983 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93963b9e-9914-41bb-93fd-3188477f3853" path="/var/lib/kubelet/pods/93963b9e-9914-41bb-93fd-3188477f3853/volumes" Nov 25 12:36:22 crc kubenswrapper[4688]: E1125 12:36:22.837086 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93963b9e_9914_41bb_93fd_3188477f3853.slice/crio-a1a604a4ab371af6222c6eaee8ac2af298c244e34359006742750098ad842ad9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93963b9e_9914_41bb_93fd_3188477f3853.slice\": RecentStats: unable to find data in memory cache]" Nov 25 12:36:31 crc kubenswrapper[4688]: I1125 12:36:31.673694 4688 generic.go:334] "Generic (PLEG): container finished" podID="31cb28aa-9d13-4a28-b87d-85abb3af9cef" containerID="868938f471c89726f0deb392b6e7336d042358a50b66d37e560205ca8f7ea3cc" exitCode=0 Nov 25 12:36:31 crc kubenswrapper[4688]: I1125 12:36:31.673750 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31cb28aa-9d13-4a28-b87d-85abb3af9cef","Type":"ContainerDied","Data":"868938f471c89726f0deb392b6e7336d042358a50b66d37e560205ca8f7ea3cc"} Nov 25 12:36:31 crc kubenswrapper[4688]: I1125 12:36:31.676004 4688 generic.go:334] "Generic (PLEG): container finished" podID="24997c07-a110-43df-accd-9daeeff9a29c" containerID="0c3fceb69d79adc3d38d4638b9c0f36b4c8409f807a4b59d74c45c9486b4fc69" exitCode=0 Nov 25 12:36:31 crc kubenswrapper[4688]: I1125 12:36:31.676036 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24997c07-a110-43df-accd-9daeeff9a29c","Type":"ContainerDied","Data":"0c3fceb69d79adc3d38d4638b9c0f36b4c8409f807a4b59d74c45c9486b4fc69"} Nov 25 12:36:32 crc kubenswrapper[4688]: I1125 12:36:32.691118 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"31cb28aa-9d13-4a28-b87d-85abb3af9cef","Type":"ContainerStarted","Data":"ec2de4dd82a39e8f70a7d0c901f9ae41ccb1775fed607c722bd81eecbadd52bf"} Nov 25 12:36:32 crc kubenswrapper[4688]: I1125 12:36:32.691704 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:36:32 crc kubenswrapper[4688]: I1125 12:36:32.694155 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24997c07-a110-43df-accd-9daeeff9a29c","Type":"ContainerStarted","Data":"e157b651ac7bf5122c4eb3e7c6d57d476e8e4fd5d33cf1b9796211377cfc70a9"} Nov 25 12:36:32 crc kubenswrapper[4688]: I1125 12:36:32.694384 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 12:36:32 crc kubenswrapper[4688]: I1125 12:36:32.727952 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.727929643 podStartE2EDuration="36.727929643s" podCreationTimestamp="2025-11-25 12:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:36:32.723409283 +0000 UTC m=+1342.833038161" watchObservedRunningTime="2025-11-25 12:36:32.727929643 +0000 UTC m=+1342.837558511" Nov 25 12:36:32 crc kubenswrapper[4688]: I1125 12:36:32.753438 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.75341716 podStartE2EDuration="37.75341716s" podCreationTimestamp="2025-11-25 12:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:36:32.747885473 +0000 UTC m=+1342.857514351" watchObservedRunningTime="2025-11-25 
12:36:32.75341716 +0000 UTC m=+1342.863046028" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.680275 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d"] Nov 25 12:36:33 crc kubenswrapper[4688]: E1125 12:36:33.687001 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" containerName="init" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.687024 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" containerName="init" Nov 25 12:36:33 crc kubenswrapper[4688]: E1125 12:36:33.687044 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93963b9e-9914-41bb-93fd-3188477f3853" containerName="dnsmasq-dns" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.687051 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="93963b9e-9914-41bb-93fd-3188477f3853" containerName="dnsmasq-dns" Nov 25 12:36:33 crc kubenswrapper[4688]: E1125 12:36:33.687070 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" containerName="dnsmasq-dns" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.687076 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" containerName="dnsmasq-dns" Nov 25 12:36:33 crc kubenswrapper[4688]: E1125 12:36:33.687088 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93963b9e-9914-41bb-93fd-3188477f3853" containerName="init" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.687093 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="93963b9e-9914-41bb-93fd-3188477f3853" containerName="init" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.687250 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d31d0dcf-fc63-4bf0-b5c5-37c63673aa04" containerName="dnsmasq-dns" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.687278 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="93963b9e-9914-41bb-93fd-3188477f3853" containerName="dnsmasq-dns" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.687892 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.693354 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.695034 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.695250 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.695461 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.699159 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d"] Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.850613 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.850799 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.850875 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.851026 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6klbn\" (UniqueName: \"kubernetes.io/projected/e2224f64-766c-4746-b65f-8e235c609a74-kube-api-access-6klbn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.954076 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.954166 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.954209 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.954288 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6klbn\" (UniqueName: \"kubernetes.io/projected/e2224f64-766c-4746-b65f-8e235c609a74-kube-api-access-6klbn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.961134 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.961615 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.961620 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:33 crc kubenswrapper[4688]: I1125 12:36:33.972719 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6klbn\" (UniqueName: \"kubernetes.io/projected/e2224f64-766c-4746-b65f-8e235c609a74-kube-api-access-6klbn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:34 crc kubenswrapper[4688]: I1125 12:36:34.012615 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:34 crc kubenswrapper[4688]: I1125 12:36:34.587869 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d"] Nov 25 12:36:34 crc kubenswrapper[4688]: I1125 12:36:34.592824 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:36:34 crc kubenswrapper[4688]: I1125 12:36:34.735793 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" event={"ID":"e2224f64-766c-4746-b65f-8e235c609a74","Type":"ContainerStarted","Data":"d838e0226dcca3f6f1539a5cb014039c94542d2201f77261c55894e5a1544409"} Nov 25 12:36:45 crc kubenswrapper[4688]: I1125 12:36:45.849548 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" event={"ID":"e2224f64-766c-4746-b65f-8e235c609a74","Type":"ContainerStarted","Data":"5a039ef6fb618d20e7d634a5402a74127d8854c38d744ba985fea11998a5bdb1"} Nov 25 12:36:45 crc kubenswrapper[4688]: I1125 12:36:45.870127 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" podStartSLOduration=2.2068076420000002 podStartE2EDuration="12.8701073s" podCreationTimestamp="2025-11-25 12:36:33 +0000 UTC" firstStartedPulling="2025-11-25 12:36:34.592621287 +0000 UTC m=+1344.702250155" lastFinishedPulling="2025-11-25 12:36:45.255920945 +0000 UTC m=+1355.365549813" observedRunningTime="2025-11-25 12:36:45.864400008 +0000 UTC m=+1355.974028866" watchObservedRunningTime="2025-11-25 12:36:45.8701073 +0000 UTC m=+1355.979736168" Nov 25 12:36:46 crc kubenswrapper[4688]: I1125 12:36:46.014721 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 12:36:46 crc kubenswrapper[4688]: I1125 12:36:46.717718 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 12:36:56 crc kubenswrapper[4688]: I1125 12:36:56.970315 4688 generic.go:334] "Generic (PLEG): container finished" podID="e2224f64-766c-4746-b65f-8e235c609a74" containerID="5a039ef6fb618d20e7d634a5402a74127d8854c38d744ba985fea11998a5bdb1" exitCode=0 Nov 25 12:36:56 crc kubenswrapper[4688]: I1125 12:36:56.970364 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" event={"ID":"e2224f64-766c-4746-b65f-8e235c609a74","Type":"ContainerDied","Data":"5a039ef6fb618d20e7d634a5402a74127d8854c38d744ba985fea11998a5bdb1"} Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.440537 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.546825 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6klbn\" (UniqueName: \"kubernetes.io/projected/e2224f64-766c-4746-b65f-8e235c609a74-kube-api-access-6klbn\") pod \"e2224f64-766c-4746-b65f-8e235c609a74\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.546981 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-repo-setup-combined-ca-bundle\") pod \"e2224f64-766c-4746-b65f-8e235c609a74\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.547127 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-inventory\") pod \"e2224f64-766c-4746-b65f-8e235c609a74\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.547198 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-ssh-key\") pod \"e2224f64-766c-4746-b65f-8e235c609a74\" (UID: \"e2224f64-766c-4746-b65f-8e235c609a74\") " Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.551832 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2224f64-766c-4746-b65f-8e235c609a74-kube-api-access-6klbn" (OuterVolumeSpecName: "kube-api-access-6klbn") pod "e2224f64-766c-4746-b65f-8e235c609a74" (UID: "e2224f64-766c-4746-b65f-8e235c609a74"). InnerVolumeSpecName "kube-api-access-6klbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.552196 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e2224f64-766c-4746-b65f-8e235c609a74" (UID: "e2224f64-766c-4746-b65f-8e235c609a74"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.572556 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-inventory" (OuterVolumeSpecName: "inventory") pod "e2224f64-766c-4746-b65f-8e235c609a74" (UID: "e2224f64-766c-4746-b65f-8e235c609a74"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.574365 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e2224f64-766c-4746-b65f-8e235c609a74" (UID: "e2224f64-766c-4746-b65f-8e235c609a74"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.649110 4688 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.649145 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.649155 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2224f64-766c-4746-b65f-8e235c609a74-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.649165 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6klbn\" (UniqueName: \"kubernetes.io/projected/e2224f64-766c-4746-b65f-8e235c609a74-kube-api-access-6klbn\") on node \"crc\" DevicePath \"\"" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.992570 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" event={"ID":"e2224f64-766c-4746-b65f-8e235c609a74","Type":"ContainerDied","Data":"d838e0226dcca3f6f1539a5cb014039c94542d2201f77261c55894e5a1544409"} Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.992610 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d" Nov 25 12:36:58 crc kubenswrapper[4688]: I1125 12:36:58.992624 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d838e0226dcca3f6f1539a5cb014039c94542d2201f77261c55894e5a1544409" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.068582 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz"] Nov 25 12:36:59 crc kubenswrapper[4688]: E1125 12:36:59.069099 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2224f64-766c-4746-b65f-8e235c609a74" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.069122 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2224f64-766c-4746-b65f-8e235c609a74" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.069378 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2224f64-766c-4746-b65f-8e235c609a74" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.070276 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.073443 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.073591 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.076106 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.077773 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.084312 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz"] Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.164916 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.165062 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf2b\" (UniqueName: \"kubernetes.io/projected/567fcefc-5ba1-449d-959c-3209a8d586a9-kube-api-access-5zf2b\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.165089 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.267153 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.267244 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zf2b\" (UniqueName: \"kubernetes.io/projected/567fcefc-5ba1-449d-959c-3209a8d586a9-kube-api-access-5zf2b\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.267278 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.276116 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.282450 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.286161 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zf2b\" (UniqueName: \"kubernetes.io/projected/567fcefc-5ba1-449d-959c-3209a8d586a9-kube-api-access-5zf2b\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8b8pz\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.394728 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:36:59 crc kubenswrapper[4688]: I1125 12:36:59.914693 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz"] Nov 25 12:37:00 crc kubenswrapper[4688]: I1125 12:37:00.002553 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" event={"ID":"567fcefc-5ba1-449d-959c-3209a8d586a9","Type":"ContainerStarted","Data":"778ba1e233c07e6726a46954d9e3f7aeba09d17d2bc8a6e6a9c5e4f4be8f8d5a"} Nov 25 12:37:01 crc kubenswrapper[4688]: I1125 12:37:01.012850 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" event={"ID":"567fcefc-5ba1-449d-959c-3209a8d586a9","Type":"ContainerStarted","Data":"0219af0f0d8b05f548e84843444e5a235b344e17432730d18ba344d163de8c9e"} Nov 25 12:37:01 crc kubenswrapper[4688]: I1125 12:37:01.031263 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" podStartSLOduration=1.605353723 podStartE2EDuration="2.031244239s" podCreationTimestamp="2025-11-25 12:36:59 +0000 UTC" firstStartedPulling="2025-11-25 12:36:59.917750077 +0000 UTC m=+1370.027378945" lastFinishedPulling="2025-11-25 12:37:00.343640593 +0000 UTC m=+1370.453269461" observedRunningTime="2025-11-25 12:37:01.030347035 +0000 UTC m=+1371.139975903" watchObservedRunningTime="2025-11-25 12:37:01.031244239 +0000 UTC m=+1371.140873127" Nov 25 12:37:04 crc kubenswrapper[4688]: I1125 12:37:04.042028 4688 generic.go:334] "Generic (PLEG): container finished" podID="567fcefc-5ba1-449d-959c-3209a8d586a9" containerID="0219af0f0d8b05f548e84843444e5a235b344e17432730d18ba344d163de8c9e" exitCode=0 Nov 25 12:37:04 crc kubenswrapper[4688]: I1125 12:37:04.042141 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" 
event={"ID":"567fcefc-5ba1-449d-959c-3209a8d586a9","Type":"ContainerDied","Data":"0219af0f0d8b05f548e84843444e5a235b344e17432730d18ba344d163de8c9e"} Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.520819 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.599644 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zf2b\" (UniqueName: \"kubernetes.io/projected/567fcefc-5ba1-449d-959c-3209a8d586a9-kube-api-access-5zf2b\") pod \"567fcefc-5ba1-449d-959c-3209a8d586a9\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.600192 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-ssh-key\") pod \"567fcefc-5ba1-449d-959c-3209a8d586a9\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.600264 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-inventory\") pod \"567fcefc-5ba1-449d-959c-3209a8d586a9\" (UID: \"567fcefc-5ba1-449d-959c-3209a8d586a9\") " Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.604871 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567fcefc-5ba1-449d-959c-3209a8d586a9-kube-api-access-5zf2b" (OuterVolumeSpecName: "kube-api-access-5zf2b") pod "567fcefc-5ba1-449d-959c-3209a8d586a9" (UID: "567fcefc-5ba1-449d-959c-3209a8d586a9"). InnerVolumeSpecName "kube-api-access-5zf2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.628609 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "567fcefc-5ba1-449d-959c-3209a8d586a9" (UID: "567fcefc-5ba1-449d-959c-3209a8d586a9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.630856 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-inventory" (OuterVolumeSpecName: "inventory") pod "567fcefc-5ba1-449d-959c-3209a8d586a9" (UID: "567fcefc-5ba1-449d-959c-3209a8d586a9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.702289 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zf2b\" (UniqueName: \"kubernetes.io/projected/567fcefc-5ba1-449d-959c-3209a8d586a9-kube-api-access-5zf2b\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.702332 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:05 crc kubenswrapper[4688]: I1125 12:37:05.702348 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/567fcefc-5ba1-449d-959c-3209a8d586a9-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.062413 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" event={"ID":"567fcefc-5ba1-449d-959c-3209a8d586a9","Type":"ContainerDied","Data":"778ba1e233c07e6726a46954d9e3f7aeba09d17d2bc8a6e6a9c5e4f4be8f8d5a"} Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.062464 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="778ba1e233c07e6726a46954d9e3f7aeba09d17d2bc8a6e6a9c5e4f4be8f8d5a" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.062548 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8b8pz" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.130231 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2"] Nov 25 12:37:06 crc kubenswrapper[4688]: E1125 12:37:06.130691 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567fcefc-5ba1-449d-959c-3209a8d586a9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.130715 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="567fcefc-5ba1-449d-959c-3209a8d586a9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.130939 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="567fcefc-5ba1-449d-959c-3209a8d586a9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.131618 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.133781 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.133794 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.133993 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.134035 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.157047 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2"] Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.212451 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.212671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.212964 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.213148 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cd7d\" (UniqueName: \"kubernetes.io/projected/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-kube-api-access-4cd7d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.315329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cd7d\" (UniqueName: \"kubernetes.io/projected/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-kube-api-access-4cd7d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.315472 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.315515 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.315626 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.339615 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.342063 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.342459 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.345742 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cd7d\" (UniqueName: \"kubernetes.io/projected/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-kube-api-access-4cd7d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.447499 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:37:06 crc kubenswrapper[4688]: I1125 12:37:06.962825 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2"] Nov 25 12:37:07 crc kubenswrapper[4688]: I1125 12:37:07.073282 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" event={"ID":"9b744290-1dac-4fcf-99d7-6a4a7b2287f6","Type":"ContainerStarted","Data":"174943e13d8c7d5a68b94f351e71468ba2d886be2445cbf70981da73efc4c17d"} Nov 25 12:37:08 crc kubenswrapper[4688]: I1125 12:37:08.091548 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" event={"ID":"9b744290-1dac-4fcf-99d7-6a4a7b2287f6","Type":"ContainerStarted","Data":"e4b99662683e41ed067205bef990028c1596404eab48856b79dc86f452e487c1"} Nov 25 12:37:08 crc kubenswrapper[4688]: I1125 12:37:08.119999 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" podStartSLOduration=1.494932686 podStartE2EDuration="2.11997258s" podCreationTimestamp="2025-11-25 12:37:06 +0000 UTC" firstStartedPulling="2025-11-25 12:37:06.96619972 +0000 UTC m=+1377.075828588" lastFinishedPulling="2025-11-25 12:37:07.591239584 +0000 UTC m=+1377.700868482" observedRunningTime="2025-11-25 12:37:08.106954015 +0000 UTC m=+1378.216582903" watchObservedRunningTime="2025-11-25 12:37:08.11997258 +0000 UTC m=+1378.229601458" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.073079 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.075276 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.078300 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.078587 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.082843 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.190088 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.190287 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.291747 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.291864 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.291932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.311504 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.408941 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:10 crc kubenswrapper[4688]: I1125 12:37:10.847129 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 12:37:11 crc kubenswrapper[4688]: I1125 12:37:11.117468 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b5335110-ea89-4bbf-8bbc-fb6c563f43db","Type":"ContainerStarted","Data":"cad90fe1dbefc1c8f1af2f51b314ec0c6b4dcf4aa8c34a792e6194baec99839f"} Nov 25 12:37:12 crc kubenswrapper[4688]: I1125 12:37:12.138649 4688 generic.go:334] "Generic (PLEG): container finished" podID="b5335110-ea89-4bbf-8bbc-fb6c563f43db" containerID="6416f1a4575d7e115ab0b7bbbb4787713a13297ad6dca443adc64be0d67bba63" exitCode=0 Nov 25 12:37:12 crc kubenswrapper[4688]: I1125 12:37:12.138739 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b5335110-ea89-4bbf-8bbc-fb6c563f43db","Type":"ContainerDied","Data":"6416f1a4575d7e115ab0b7bbbb4787713a13297ad6dca443adc64be0d67bba63"} Nov 25 12:37:13 crc kubenswrapper[4688]: I1125 12:37:13.469440 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:13 crc kubenswrapper[4688]: I1125 12:37:13.550108 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kubelet-dir\") pod \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\" (UID: \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\") " Nov 25 12:37:13 crc kubenswrapper[4688]: I1125 12:37:13.550238 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5335110-ea89-4bbf-8bbc-fb6c563f43db" (UID: "b5335110-ea89-4bbf-8bbc-fb6c563f43db"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:37:13 crc kubenswrapper[4688]: I1125 12:37:13.550269 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kube-api-access\") pod \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\" (UID: \"b5335110-ea89-4bbf-8bbc-fb6c563f43db\") " Nov 25 12:37:13 crc kubenswrapper[4688]: I1125 12:37:13.550706 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:13 crc kubenswrapper[4688]: I1125 12:37:13.555849 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5335110-ea89-4bbf-8bbc-fb6c563f43db" (UID: "b5335110-ea89-4bbf-8bbc-fb6c563f43db"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:37:13 crc kubenswrapper[4688]: I1125 12:37:13.651951 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5335110-ea89-4bbf-8bbc-fb6c563f43db-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:14 crc kubenswrapper[4688]: I1125 12:37:14.157688 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b5335110-ea89-4bbf-8bbc-fb6c563f43db","Type":"ContainerDied","Data":"cad90fe1dbefc1c8f1af2f51b314ec0c6b4dcf4aa8c34a792e6194baec99839f"} Nov 25 12:37:14 crc kubenswrapper[4688]: I1125 12:37:14.157727 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cad90fe1dbefc1c8f1af2f51b314ec0c6b4dcf4aa8c34a792e6194baec99839f" Nov 25 12:37:14 crc kubenswrapper[4688]: I1125 12:37:14.157829 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.070162 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 12:37:17 crc kubenswrapper[4688]: E1125 12:37:17.071657 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5335110-ea89-4bbf-8bbc-fb6c563f43db" containerName="pruner" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.071682 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5335110-ea89-4bbf-8bbc-fb6c563f43db" containerName="pruner" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.072234 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5335110-ea89-4bbf-8bbc-fb6c563f43db" containerName="pruner" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.073038 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.080727 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.080899 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.083880 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.129763 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kube-api-access\") pod \"installer-9-crc\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.130043 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-var-lock\") pod \"installer-9-crc\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.130204 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kubelet-dir\") pod \"installer-9-crc\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.232834 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kube-api-access\") pod \"installer-9-crc\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.232905 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-var-lock\") pod \"installer-9-crc\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.233014 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kubelet-dir\") pod \"installer-9-crc\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.233165 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-var-lock\") pod \"installer-9-crc\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.233186 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.251574 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kube-api-access\") pod \"installer-9-crc\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.395789 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.854680 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.855182 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:37:17 crc kubenswrapper[4688]: I1125 12:37:17.893898 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 12:37:18 crc kubenswrapper[4688]: I1125 12:37:18.192175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62","Type":"ContainerStarted","Data":"f118e6005001f0ca0f3d4ede3b0cee1fea66cf87ac6a271a4cf982096ce26150"} Nov 25 12:37:19 crc kubenswrapper[4688]: I1125 12:37:19.202431 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62","Type":"ContainerStarted","Data":"0d9e8d5c8c3c8074b693bc4eea038281aa46293ff0d8c0a26dd9ac32031e05bd"} Nov 25 12:37:19 crc kubenswrapper[4688]: I1125 12:37:19.218125 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.218104094 podStartE2EDuration="2.218104094s" podCreationTimestamp="2025-11-25 12:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:37:19.216784449 +0000 UTC m=+1389.326413317" watchObservedRunningTime="2025-11-25 12:37:19.218104094 +0000 UTC m=+1389.327732962" Nov 25 12:37:47 crc kubenswrapper[4688]: I1125 12:37:47.854293 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:37:47 crc kubenswrapper[4688]: I1125 12:37:47.854861 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:37:56 crc 
kubenswrapper[4688]: I1125 12:37:56.068375 4688 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.070020 4688 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.070174 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.070365 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba" gracePeriod=15 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.070515 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda" gracePeriod=15 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.070569 4688 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.070584 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08" gracePeriod=15 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.070516 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8" gracePeriod=15 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.070562 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec" gracePeriod=15 Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.071270 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071304 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.071319 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071329 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.071344 4688 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071352 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.071376 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071383 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.071405 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071414 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.071430 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071439 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.071452 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071460 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071715 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071733 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071750 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071765 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071778 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071789 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.071801 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.072075 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.072090 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.076182 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.132595 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.132738 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.132806 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.132952 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.133114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.133320 4688 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.159:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.133378 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.133783 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.133861 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238020 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238367 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238415 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238447 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238461 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238503 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238557 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238192 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238621 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238680 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238722 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238828 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238930 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.238948 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.239013 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.239121 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.435225 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:56 crc kubenswrapper[4688]: E1125 12:37:56.480266 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.159:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b40353ef7314d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 12:37:56.479512909 +0000 UTC m=+1426.589141797,LastTimestamp:2025-11-25 12:37:56.479512909 +0000 UTC m=+1426.589141797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.549177 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c31fb76176231fc8284367bf529a1c61b8b1a562a4df47e237679f31c865bd61"} Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.552829 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.554508 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.555843 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8" exitCode=0 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.555864 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda" exitCode=0 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.555897 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08" exitCode=0 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.555906 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec" exitCode=2 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.555940 4688 scope.go:117] "RemoveContainer" containerID="4d203ecf3fdfff765cf32442e0a49d2949ec17ff2b29624d5ec67254db8e1c67" Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.557790 4688 generic.go:334] "Generic (PLEG): container finished" podID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" containerID="0d9e8d5c8c3c8074b693bc4eea038281aa46293ff0d8c0a26dd9ac32031e05bd" 
exitCode=0 Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.557838 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62","Type":"ContainerDied","Data":"0d9e8d5c8c3c8074b693bc4eea038281aa46293ff0d8c0a26dd9ac32031e05bd"} Nov 25 12:37:56 crc kubenswrapper[4688]: I1125 12:37:56.559384 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:57 crc kubenswrapper[4688]: I1125 12:37:57.570952 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"57dde6a7c4f85cf4c3e2af6ab924179275e8ede39251ec5a8f2f42561f4f95b1"} Nov 25 12:37:57 crc kubenswrapper[4688]: E1125 12:37:57.572924 4688 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.159:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:57 crc kubenswrapper[4688]: I1125 12:37:57.573222 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:57 crc kubenswrapper[4688]: I1125 12:37:57.574956 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 12:37:57 crc kubenswrapper[4688]: I1125 12:37:57.936043 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:57 crc kubenswrapper[4688]: I1125 12:37:57.937108 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.071328 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kubelet-dir\") pod \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.071398 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-var-lock\") pod \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.071463 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" (UID: "31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.071607 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-var-lock" (OuterVolumeSpecName: "var-lock") pod "31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" (UID: "31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.071718 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kube-api-access\") pod \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\" (UID: \"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62\") " Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.072251 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.072277 4688 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.078773 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" (UID: "31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.268956 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.584002 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.584974 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.585868 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.586092 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.586097 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62","Type":"ContainerDied","Data":"f118e6005001f0ca0f3d4ede3b0cee1fea66cf87ac6a271a4cf982096ce26150"} Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.586154 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f118e6005001f0ca0f3d4ede3b0cee1fea66cf87ac6a271a4cf982096ce26150" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.586339 4688 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.588491 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.589341 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba" exitCode=0 Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.589411 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.589470 4688 scope.go:117] "RemoveContainer" containerID="ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8" Nov 25 12:37:58 crc kubenswrapper[4688]: E1125 12:37:58.590022 4688 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.159:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.599283 4688 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.599716 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.608882 4688 scope.go:117] "RemoveContainer" containerID="a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.626942 4688 scope.go:117] "RemoveContainer" containerID="27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.651053 4688 scope.go:117] "RemoveContainer" containerID="c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.670574 4688 scope.go:117] "RemoveContainer" containerID="fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.690775 4688 scope.go:117] "RemoveContainer" containerID="23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.710660 4688 scope.go:117] "RemoveContainer" containerID="ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8" Nov 25 12:37:58 crc kubenswrapper[4688]: E1125 12:37:58.711147 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\": container with ID starting with ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8 not found: ID does not exist" containerID="ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.711194 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8"} err="failed to get container status \"ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\": rpc error: code = NotFound desc = could not find container \"ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8\": container with ID starting with ded207fc144b25092d237a8d13f39a5f4fd1d1bcbd881dbd9b416cc7339863a8 not found: ID does not exist" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 
12:37:58.711220 4688 scope.go:117] "RemoveContainer" containerID="a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda" Nov 25 12:37:58 crc kubenswrapper[4688]: E1125 12:37:58.711807 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\": container with ID starting with a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda not found: ID does not exist" containerID="a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.711853 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda"} err="failed to get container status \"a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\": rpc error: code = NotFound desc = could not find container \"a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda\": container with ID starting with a793c9c3da1de525cfcbaf6e9fda2eb9134dc578e2d3e439c52067f857816cda not found: ID does not exist" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.711881 4688 scope.go:117] "RemoveContainer" containerID="27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08" Nov 25 12:37:58 crc kubenswrapper[4688]: E1125 12:37:58.712204 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\": container with ID starting with 27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08 not found: ID does not exist" containerID="27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.712244 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08"} err="failed to get container status \"27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\": rpc error: code = NotFound desc = could not find container \"27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08\": container with ID starting with 27f0fedfd38bf1946468c7d367c1fc13d58ab05f446e5139943aef3a696e6c08 not found: ID does not exist" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.712269 4688 scope.go:117] "RemoveContainer" containerID="c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec" Nov 25 12:37:58 crc kubenswrapper[4688]: E1125 12:37:58.712563 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\": container with ID starting with c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec not found: ID does not exist" containerID="c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.712596 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec"} err="failed to get container status \"c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\": rpc error: code = NotFound desc = could not find container \"c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec\": container with ID 
starting with c37f9040c17ffafc7d45054fcf3ff183745132c1ef899f6f2e435ff5823bcdec not found: ID does not exist" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.712617 4688 scope.go:117] "RemoveContainer" containerID="fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba" Nov 25 12:37:58 crc kubenswrapper[4688]: E1125 12:37:58.712870 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\": container with ID starting with fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba not found: ID does not exist" containerID="fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.712900 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba"} err="failed to get container status \"fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\": rpc error: code = NotFound desc = could not find container \"fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba\": container with ID starting with fe16d004439b88506a474cd81473a3c8dc49cb8a34d18f5059d3e878e06b25ba not found: ID does not exist" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.712917 4688 scope.go:117] "RemoveContainer" containerID="23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173" Nov 25 12:37:58 crc kubenswrapper[4688]: E1125 12:37:58.713622 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\": container with ID starting with 23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173 not found: ID does not exist" containerID="23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.713652 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173"} err="failed to get container status \"23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\": rpc error: code = NotFound desc = could not find container \"23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173\": container with ID starting with 23d57e3857256401d56d9a15f1dc1ec464c48fb7d0f5e3d300ec5a7f88a38173 not found: ID does not exist"
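Each "RemoveContainer" above is answered by a NotFound pair from the runtime: the containers were already gone, so DeleteContainer has nothing left to do and the kubelet just logs the error and continues. A sketch of that idempotent-delete pattern; the runtime object and NotFoundError are hypothetical stand-ins for a CRI client, not a real API:

```python
class NotFoundError(Exception):
    """Hypothetical stand-in for the CRI's NotFound status code."""

def remove_container(runtime, container_id: str) -> None:
    """Idempotent delete: if the container is already gone, the goal
    state is already met, so swallow NotFound instead of failing the
    sync, which is how the kubelet proceeds in the entries above."""
    try:
        runtime.remove(container_id)
    except NotFoundError:
        print(f"container {container_id[:12]} already gone; nothing to do")
```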
"f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.777377 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.777388 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.777501 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.778779 4688 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.778807 4688 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.778820 4688 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.904424 4688 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:58 crc kubenswrapper[4688]: I1125 12:37:58.904692 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:59 crc kubenswrapper[4688]: E1125 12:37:59.459872 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:59 crc kubenswrapper[4688]: E1125 12:37:59.460437 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:59 crc kubenswrapper[4688]: E1125 12:37:59.460679 4688 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:59 crc kubenswrapper[4688]: E1125 12:37:59.460859 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:59 crc kubenswrapper[4688]: E1125 12:37:59.461035 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:37:59 crc kubenswrapper[4688]: I1125 12:37:59.461060 4688 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 12:37:59 crc kubenswrapper[4688]: E1125 12:37:59.461246 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="200ms" Nov 25 12:37:59 crc kubenswrapper[4688]: E1125 12:37:59.657870 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.159:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b40353ef7314d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 12:37:56.479512909 +0000 UTC m=+1426.589141797,LastTimestamp:2025-11-25 12:37:56.479512909 +0000 UTC m=+1426.589141797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 12:37:59 crc kubenswrapper[4688]: E1125 12:37:59.662446 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="400ms" Nov 25 12:38:00 crc kubenswrapper[4688]: E1125 12:38:00.063958 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="800ms" Nov 25 12:38:00 crc kubenswrapper[4688]: I1125 12:38:00.752150 4688 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" 
Nov 25 12:38:00 crc kubenswrapper[4688]: I1125 12:38:00.752800 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:00 crc kubenswrapper[4688]: I1125 12:38:00.753794 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 12:38:00 crc kubenswrapper[4688]: E1125 12:38:00.865381 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="1.6s" Nov 25 12:38:01 crc kubenswrapper[4688]: E1125 12:38:01.828756 4688 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.159:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-knl2c" volumeName="registry-storage" Nov 25 12:38:02 crc kubenswrapper[4688]: E1125 12:38:02.466105 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="3.2s" Nov 25 12:38:03 crc kubenswrapper[4688]: I1125 12:38:03.820586 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="95c2802c-7143-4d63-8959-434c04453333" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 12:38:05 crc kubenswrapper[4688]: E1125 12:38:05.667873 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="6.4s" Nov 25 12:38:06 crc kubenswrapper[4688]: I1125 12:38:06.689938 4688 generic.go:334] "Generic (PLEG): container finished" podID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" containerID="1cc79231e22c48243f924a28ac4a1809fc2e31145a2ca4c333619082ed800858" exitCode=1 Nov 25 12:38:06 crc kubenswrapper[4688]: I1125 12:38:06.690097 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" event={"ID":"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3","Type":"ContainerDied","Data":"1cc79231e22c48243f924a28ac4a1809fc2e31145a2ca4c333619082ed800858"} Nov 25 12:38:06 crc kubenswrapper[4688]: I1125 12:38:06.690907 4688 scope.go:117] "RemoveContainer" containerID="1cc79231e22c48243f924a28ac4a1809fc2e31145a2ca4c333619082ed800858" Nov 25 12:38:06 crc kubenswrapper[4688]: I1125 12:38:06.691315 4688 status_manager.go:851] "Failed to get status for pod" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-744bc4ddc8-58c5m\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:06 crc kubenswrapper[4688]: I1125 12:38:06.692474 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:07 crc kubenswrapper[4688]: I1125 12:38:07.700654 4688 generic.go:334] "Generic (PLEG): container finished" podID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" containerID="a5b7b05aeb801bfe159c9bf755f247fc3537203688d95edb7a84d1b18c96acd8" exitCode=1 Nov 25 12:38:07 crc kubenswrapper[4688]: I1125 12:38:07.700704 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" event={"ID":"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3","Type":"ContainerDied","Data":"a5b7b05aeb801bfe159c9bf755f247fc3537203688d95edb7a84d1b18c96acd8"} Nov 25 12:38:07 crc kubenswrapper[4688]: I1125 12:38:07.701061 4688 scope.go:117] "RemoveContainer" containerID="1cc79231e22c48243f924a28ac4a1809fc2e31145a2ca4c333619082ed800858" Nov 25 12:38:07 crc kubenswrapper[4688]: I1125 12:38:07.701944 4688 status_manager.go:851] "Failed to get status for pod" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-744bc4ddc8-58c5m\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:07 crc kubenswrapper[4688]: I1125 12:38:07.702143 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:07 crc kubenswrapper[4688]: I1125 12:38:07.703309 4688 scope.go:117] "RemoveContainer" containerID="a5b7b05aeb801bfe159c9bf755f247fc3537203688d95edb7a84d1b18c96acd8" Nov 25 12:38:07 crc kubenswrapper[4688]: E1125 12:38:07.703709 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-744bc4ddc8-58c5m_metallb-system(d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3)\"" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" Nov 25 12:38:09 crc kubenswrapper[4688]: E1125 12:38:09.659623 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.159:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b40353ef7314d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 12:37:56.479512909 +0000 UTC m=+1426.589141797,LastTimestamp:2025-11-25 12:37:56.479512909 +0000 UTC m=+1426.589141797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 12:38:09 crc kubenswrapper[4688]: I1125 12:38:09.724078 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 12:38:09 crc kubenswrapper[4688]: I1125 12:38:09.724140 4688 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958" exitCode=1 Nov 25 12:38:09 crc kubenswrapper[4688]: I1125 12:38:09.724178 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958"} Nov 25 12:38:09 crc kubenswrapper[4688]: I1125 12:38:09.725001 4688 scope.go:117] "RemoveContainer" containerID="317438d1c06b7f1ec654065130813bde1ff99fecd77561eb96501513547d3958" Nov 25 12:38:09 crc kubenswrapper[4688]: I1125 12:38:09.725578 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:09 crc kubenswrapper[4688]: I1125 12:38:09.727900 4688 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:09 crc kubenswrapper[4688]: I1125 12:38:09.728399 4688 status_manager.go:851] "Failed to get status for pod" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-744bc4ddc8-58c5m\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.737074 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.737403 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"46d161143a0c806c24d3b8ec03ffff9fa3dda9b8d8c5436441cc28b22c743a14"} Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.738450 4688 status_manager.go:851] "Failed to get status for pod" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" 
pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-744bc4ddc8-58c5m\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.738825 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.739151 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.740044 4688 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.756663 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.756701 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:10 crc kubenswrapper[4688]: E1125 12:38:10.757146 4688 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.758013 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.758607 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.759009 4688 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.759564 4688 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.759953 4688 status_manager.go:851] "Failed to get status for pod" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-744bc4ddc8-58c5m\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.761798 4688 status_manager.go:851] "Failed to get status for pod" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-744bc4ddc8-58c5m\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.761991 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.762183 4688 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: I1125 12:38:10.762339 4688 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:10 crc kubenswrapper[4688]: W1125 12:38:10.791147 4688 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-dffca5b2e8dff8fbc83a5a2f445c7505a80914f45649e1419470db46b4a6ea27 WatchSource:0}: Error finding container dffca5b2e8dff8fbc83a5a2f445c7505a80914f45649e1419470db46b4a6ea27: Status 404 returned error can't find the container with id dffca5b2e8dff8fbc83a5a2f445c7505a80914f45649e1419470db46b4a6ea27 Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.302504 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.303547 4688 scope.go:117] "RemoveContainer" containerID="a5b7b05aeb801bfe159c9bf755f247fc3537203688d95edb7a84d1b18c96acd8" Nov 25 12:38:11 crc kubenswrapper[4688]: E1125 12:38:11.303761 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-744bc4ddc8-58c5m_metallb-system(d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3)\"" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.303831 4688 status_manager.go:851] "Failed to get status for pod" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-744bc4ddc8-58c5m\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.304216 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.304723 4688 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.305038 4688 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.746902 4688 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d3899611513297377f613515f682467420d01b05b5dc5feada5d3efa76182d03" exitCode=0 Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.746952 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d3899611513297377f613515f682467420d01b05b5dc5feada5d3efa76182d03"} Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.747031 4688 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dffca5b2e8dff8fbc83a5a2f445c7505a80914f45649e1419470db46b4a6ea27"} Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.747456 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.747479 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.747867 4688 status_manager.go:851] "Failed to get status for pod" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-744bc4ddc8-58c5m\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:11 crc kubenswrapper[4688]: E1125 12:38:11.748097 4688 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.748119 4688 status_manager.go:851] "Failed to get status for pod" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.748409 4688 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:11 crc kubenswrapper[4688]: I1125 12:38:11.748668 4688 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.159:6443: connect: connection refused" Nov 25 12:38:12 crc kubenswrapper[4688]: E1125 12:38:12.069644 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.159:6443: connect: connection refused" interval="7s" Nov 25 12:38:12 crc kubenswrapper[4688]: I1125 12:38:12.768538 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"188527119aebeb6fe7141cc94e46a614d7136a004bb928fc6f867a54239c9d1a"} Nov 25 12:38:12 crc kubenswrapper[4688]: I1125 12:38:12.768889 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8966ee7e431ada5c7b51ad0206f286c1f8912ab02ed0146e0e8e4be906b54d2e"} Nov 25 12:38:12 crc kubenswrapper[4688]: I1125 12:38:12.768909 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c2918c8dc08bce8468ed6336190b2c3282c89043bb808eeec9721469477429f"} Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.050568 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.057963 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.756408 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="95c2802c-7143-4d63-8959-434c04453333" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.781961 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5143a0ccc2bdc0e90131ccf589b484b6da3c88b685997ae3154e4b65daae0a05"} Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.782288 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.782303 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.782311 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5ea51ccc2df8fe0b89495f1f8211c317f03dc56a00b3077c102efd2c9dfd5481"} Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.782165 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:13 crc kubenswrapper[4688]: I1125 12:38:13.782331 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:15 crc kubenswrapper[4688]: I1125 12:38:15.758201 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:15 crc kubenswrapper[4688]: I1125 12:38:15.758576 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:15 crc kubenswrapper[4688]: I1125 12:38:15.763562 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:16 crc kubenswrapper[4688]: I1125 12:38:16.811893 4688 generic.go:334] "Generic (PLEG): container finished" podID="7f63e16e-9d9b-4e1a-b497-1417e8e7b79e" containerID="785ee7ed2f591c9c7fa427eb3bb8bc2aa28ab24f4daf32e35cf42f60fe63300f" exitCode=1 Nov 25 12:38:16 crc kubenswrapper[4688]: I1125 12:38:16.812138 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" event={"ID":"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e","Type":"ContainerDied","Data":"785ee7ed2f591c9c7fa427eb3bb8bc2aa28ab24f4daf32e35cf42f60fe63300f"} Nov 25 12:38:16 crc kubenswrapper[4688]: I1125 12:38:16.813532 4688 scope.go:117] "RemoveContainer" containerID="785ee7ed2f591c9c7fa427eb3bb8bc2aa28ab24f4daf32e35cf42f60fe63300f" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.822290 4688 generic.go:334] "Generic (PLEG): container finished" podID="3f65195f-4002-4d44-a25c-3c2603ed14c6" containerID="ee1357dcb20670eee0f1daee0282ce8812681af9f3cc544c2cd12432649625d8" exitCode=1 Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.822364 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" event={"ID":"3f65195f-4002-4d44-a25c-3c2603ed14c6","Type":"ContainerDied","Data":"ee1357dcb20670eee0f1daee0282ce8812681af9f3cc544c2cd12432649625d8"} Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.823394 4688 scope.go:117] "RemoveContainer" containerID="ee1357dcb20670eee0f1daee0282ce8812681af9f3cc544c2cd12432649625d8" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.825245 4688 generic.go:334] "Generic (PLEG): container finished" podID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" containerID="a37bad09623ad3855f9033dd417d87217feb5d79669ad1895e182ef4002f3c35" exitCode=1 Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.825283 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" event={"ID":"3649a66a-709f-4b77-b798-e5f90eeb2e5d","Type":"ContainerDied","Data":"a37bad09623ad3855f9033dd417d87217feb5d79669ad1895e182ef4002f3c35"} Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.825666 4688 scope.go:117] "RemoveContainer" containerID="a37bad09623ad3855f9033dd417d87217feb5d79669ad1895e182ef4002f3c35" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.827348 4688 generic.go:334] "Generic (PLEG): container finished" podID="93553656-ef25-4318-81f1-a4e7f973ed38" containerID="5036a212d04e6e3526c7d54957449ee62139b0c2449e2dedcd8cfe17d0922dde" exitCode=1 Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.827390 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" event={"ID":"93553656-ef25-4318-81f1-a4e7f973ed38","Type":"ContainerDied","Data":"5036a212d04e6e3526c7d54957449ee62139b0c2449e2dedcd8cfe17d0922dde"} Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.827702 4688 scope.go:117] "RemoveContainer" containerID="5036a212d04e6e3526c7d54957449ee62139b0c2449e2dedcd8cfe17d0922dde" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.830781 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" event={"ID":"7f63e16e-9d9b-4e1a-b497-1417e8e7b79e","Type":"ContainerStarted","Data":"aa785af852dfbf18bbd385b9e06167f43228aaee9dcc378f3fa605d13e3d2d15"} Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.831292 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.834297 4688 generic.go:334] "Generic (PLEG): container finished" podID="d4c78fcc-139a-4485-8628-dc14422a4710" 
containerID="a45c96f62e19ea42b6e9a9c43b417b497a73b90ff0fd2b088377d433e4956f7e" exitCode=1 Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.834345 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" event={"ID":"d4c78fcc-139a-4485-8628-dc14422a4710","Type":"ContainerDied","Data":"a45c96f62e19ea42b6e9a9c43b417b497a73b90ff0fd2b088377d433e4956f7e"} Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.834767 4688 scope.go:117] "RemoveContainer" containerID="a45c96f62e19ea42b6e9a9c43b417b497a73b90ff0fd2b088377d433e4956f7e" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.836923 4688 generic.go:334] "Generic (PLEG): container finished" podID="ae188502-8c93-4a53-bb69-b9a964c82bc6" containerID="47dd0a5e4c13d85ee89c9f61e5a2017ba6807678b08260179adda75564a5b550" exitCode=1 Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.836941 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" event={"ID":"ae188502-8c93-4a53-bb69-b9a964c82bc6","Type":"ContainerDied","Data":"47dd0a5e4c13d85ee89c9f61e5a2017ba6807678b08260179adda75564a5b550"} Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.837225 4688 scope.go:117] "RemoveContainer" containerID="47dd0a5e4c13d85ee89c9f61e5a2017ba6807678b08260179adda75564a5b550" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.853659 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.853708 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.853749 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.854386 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"adac398a94564aa341b35f325f1f99096f13126fe72668e940408e0ca6a84914"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:38:17 crc kubenswrapper[4688]: I1125 12:38:17.854481 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://adac398a94564aa341b35f325f1f99096f13126fe72668e940408e0ca6a84914" gracePeriod=600 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.189446 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.449371 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.535469 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.583116 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.793599 4688 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.842587 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c7d92b34-d557-443a-8eab-5b912ed49497" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.851087 4688 generic.go:334] "Generic (PLEG): container finished" podID="6efe1c76-76a3-4c72-bb71-0963553bbb98" containerID="b4c9b13e4a677e9751d2d757bf43013184c9a662ac9fb84d2b850ff6a0653b3b" exitCode=1 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.851174 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" event={"ID":"6efe1c76-76a3-4c72-bb71-0963553bbb98","Type":"ContainerDied","Data":"b4c9b13e4a677e9751d2d757bf43013184c9a662ac9fb84d2b850ff6a0653b3b"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.852029 4688 scope.go:117] "RemoveContainer" containerID="b4c9b13e4a677e9751d2d757bf43013184c9a662ac9fb84d2b850ff6a0653b3b" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.857796 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="adac398a94564aa341b35f325f1f99096f13126fe72668e940408e0ca6a84914" exitCode=0 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.857887 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"adac398a94564aa341b35f325f1f99096f13126fe72668e940408e0ca6a84914"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.857940 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.857963 4688 scope.go:117] "RemoveContainer" containerID="606e3c186faf0a77643eeee31f20e3a41380a34fa45ab23f8be805001dd713d2" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.863335 4688 generic.go:334] "Generic (PLEG): container finished" podID="3f65195f-4002-4d44-a25c-3c2603ed14c6" containerID="8fcb21199572599de638da4bab301d2df24e292b1800125da62c1e6fb99e94af" exitCode=1 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.863393 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" event={"ID":"3f65195f-4002-4d44-a25c-3c2603ed14c6","Type":"ContainerDied","Data":"8fcb21199572599de638da4bab301d2df24e292b1800125da62c1e6fb99e94af"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.864085 4688 scope.go:117] 
"RemoveContainer" containerID="8fcb21199572599de638da4bab301d2df24e292b1800125da62c1e6fb99e94af" Nov 25 12:38:18 crc kubenswrapper[4688]: E1125 12:38:18.864318 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-c877c965-jptwb_openstack-operators(3f65195f-4002-4d44-a25c-3c2603ed14c6)\"" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" podUID="3f65195f-4002-4d44-a25c-3c2603ed14c6" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.875921 4688 generic.go:334] "Generic (PLEG): container finished" podID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" containerID="6550b7e2df2b22b3c044a909d54ab7e182c473df8b8c5b92328410d06422739d" exitCode=1 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.876629 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" event={"ID":"3649a66a-709f-4b77-b798-e5f90eeb2e5d","Type":"ContainerDied","Data":"6550b7e2df2b22b3c044a909d54ab7e182c473df8b8c5b92328410d06422739d"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.877305 4688 scope.go:117] "RemoveContainer" containerID="6550b7e2df2b22b3c044a909d54ab7e182c473df8b8c5b92328410d06422739d" Nov 25 12:38:18 crc kubenswrapper[4688]: E1125 12:38:18.877627 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-nxnmg_openstack-operators(3649a66a-709f-4b77-b798-e5f90eeb2e5d)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podUID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.880776 4688 generic.go:334] "Generic (PLEG): container finished" podID="592ea8b1-efc4-4027-a7dc-3943125fd935" containerID="d8b307a13d0e6e143fadcefc02052bc1e6b00372c4269800f25041addd60f0e7" exitCode=1 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.880833 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" event={"ID":"592ea8b1-efc4-4027-a7dc-3943125fd935","Type":"ContainerDied","Data":"d8b307a13d0e6e143fadcefc02052bc1e6b00372c4269800f25041addd60f0e7"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.881202 4688 scope.go:117] "RemoveContainer" containerID="d8b307a13d0e6e143fadcefc02052bc1e6b00372c4269800f25041addd60f0e7" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.889598 4688 generic.go:334] "Generic (PLEG): container finished" podID="93553656-ef25-4318-81f1-a4e7f973ed38" containerID="8417dc9bfb0b29f41ec6ee75bcd1ba3518a269509fa7de6cf91626cd8e1f2974" exitCode=1 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.889675 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" event={"ID":"93553656-ef25-4318-81f1-a4e7f973ed38","Type":"ContainerDied","Data":"8417dc9bfb0b29f41ec6ee75bcd1ba3518a269509fa7de6cf91626cd8e1f2974"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.890454 4688 scope.go:117] "RemoveContainer" containerID="8417dc9bfb0b29f41ec6ee75bcd1ba3518a269509fa7de6cf91626cd8e1f2974" Nov 25 12:38:18 crc kubenswrapper[4688]: E1125 12:38:18.890767 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-wf6w6_openstack-operators(93553656-ef25-4318-81f1-a4e7f973ed38)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" podUID="93553656-ef25-4318-81f1-a4e7f973ed38" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.900195 4688 generic.go:334] "Generic (PLEG): container finished" podID="d4c78fcc-139a-4485-8628-dc14422a4710" containerID="aa039b00a08ec05df72b6b8e976de7195f81ce39293da3c4305dd662b2df2964" exitCode=1 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.900271 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" event={"ID":"d4c78fcc-139a-4485-8628-dc14422a4710","Type":"ContainerDied","Data":"aa039b00a08ec05df72b6b8e976de7195f81ce39293da3c4305dd662b2df2964"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.903300 4688 scope.go:117] "RemoveContainer" containerID="aa039b00a08ec05df72b6b8e976de7195f81ce39293da3c4305dd662b2df2964" Nov 25 12:38:18 crc kubenswrapper[4688]: E1125 12:38:18.903751 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-c76gt_openstack-operators(d4c78fcc-139a-4485-8628-dc14422a4710)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podUID="d4c78fcc-139a-4485-8628-dc14422a4710" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.905560 4688 scope.go:117] "RemoveContainer" containerID="ee1357dcb20670eee0f1daee0282ce8812681af9f3cc544c2cd12432649625d8" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.908111 4688 generic.go:334] "Generic (PLEG): container finished" podID="ae188502-8c93-4a53-bb69-b9a964c82bc6" containerID="b431561d94048b029026382ca01b77e1b1bc1c235b999c22c83953cc336d5ee9" exitCode=1 Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.908659 4688 scope.go:117] "RemoveContainer" containerID="b431561d94048b029026382ca01b77e1b1bc1c235b999c22c83953cc336d5ee9" Nov 25 12:38:18 crc kubenswrapper[4688]: E1125 12:38:18.909051 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-gf8vv_openstack-operators(ae188502-8c93-4a53-bb69-b9a964c82bc6)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" podUID="ae188502-8c93-4a53-bb69-b9a964c82bc6" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.909245 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" event={"ID":"ae188502-8c93-4a53-bb69-b9a964c82bc6","Type":"ContainerDied","Data":"b431561d94048b029026382ca01b77e1b1bc1c235b999c22c83953cc336d5ee9"} Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.910745 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.910789 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:18 crc kubenswrapper[4688]: I1125 12:38:18.915450 4688 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.213821 4688 scope.go:117] "RemoveContainer" containerID="a37bad09623ad3855f9033dd417d87217feb5d79669ad1895e182ef4002f3c35" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.303620 4688 scope.go:117] "RemoveContainer" containerID="5036a212d04e6e3526c7d54957449ee62139b0c2449e2dedcd8cfe17d0922dde" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.393087 4688 scope.go:117] "RemoveContainer" containerID="a45c96f62e19ea42b6e9a9c43b417b497a73b90ff0fd2b088377d433e4956f7e" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.460475 4688 scope.go:117] "RemoveContainer" containerID="47dd0a5e4c13d85ee89c9f61e5a2017ba6807678b08260179adda75564a5b550" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.485464 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" podUID="34cf6884-a630-417d-81ff-08c5ff19be31" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.485665 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" podUID="34cf6884-a630-417d-81ff-08c5ff19be31" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": dial tcp 10.217.0.71:8081: connect: connection refused" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.921206 4688 scope.go:117] "RemoveContainer" containerID="8fcb21199572599de638da4bab301d2df24e292b1800125da62c1e6fb99e94af" Nov 25 12:38:19 crc kubenswrapper[4688]: E1125 12:38:19.921946 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-c877c965-jptwb_openstack-operators(3f65195f-4002-4d44-a25c-3c2603ed14c6)\"" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" podUID="3f65195f-4002-4d44-a25c-3c2603ed14c6" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.923282 4688 scope.go:117] "RemoveContainer" containerID="6550b7e2df2b22b3c044a909d54ab7e182c473df8b8c5b92328410d06422739d" Nov 25 12:38:19 crc kubenswrapper[4688]: E1125 12:38:19.923639 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-nxnmg_openstack-operators(3649a66a-709f-4b77-b798-e5f90eeb2e5d)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podUID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.928368 4688 generic.go:334] "Generic (PLEG): container finished" podID="592ea8b1-efc4-4027-a7dc-3943125fd935" containerID="6eb7cd25e8c9d67bb4798a378ac7c29959b85c81d07162ab3299db95556e38b5" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.928444 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" event={"ID":"592ea8b1-efc4-4027-a7dc-3943125fd935","Type":"ContainerDied","Data":"6eb7cd25e8c9d67bb4798a378ac7c29959b85c81d07162ab3299db95556e38b5"} Nov 
25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.928489 4688 scope.go:117] "RemoveContainer" containerID="d8b307a13d0e6e143fadcefc02052bc1e6b00372c4269800f25041addd60f0e7" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.929086 4688 scope.go:117] "RemoveContainer" containerID="6eb7cd25e8c9d67bb4798a378ac7c29959b85c81d07162ab3299db95556e38b5" Nov 25 12:38:19 crc kubenswrapper[4688]: E1125 12:38:19.929364 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-9qfpp_openstack-operators(592ea8b1-efc4-4027-a7dc-3943125fd935)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" podUID="592ea8b1-efc4-4027-a7dc-3943125fd935" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.930616 4688 generic.go:334] "Generic (PLEG): container finished" podID="6efa691a-9f05-4d6a-8517-cba5b00426cd" containerID="bec070dbf35f6a57db847bf4f9a0ca580e6451ba49f64d187e876839779b36c8" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.930662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" event={"ID":"6efa691a-9f05-4d6a-8517-cba5b00426cd","Type":"ContainerDied","Data":"bec070dbf35f6a57db847bf4f9a0ca580e6451ba49f64d187e876839779b36c8"} Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.931234 4688 scope.go:117] "RemoveContainer" containerID="bec070dbf35f6a57db847bf4f9a0ca580e6451ba49f64d187e876839779b36c8" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.933424 4688 generic.go:334] "Generic (PLEG): container finished" podID="7cd9dc7e-be06-416a-aebe-c0b160c79697" containerID="47235787b9c64077add0ac3ab974e741b605a5de991b3fbe1d5abb03e9d7f3b0" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.933490 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" event={"ID":"7cd9dc7e-be06-416a-aebe-c0b160c79697","Type":"ContainerDied","Data":"47235787b9c64077add0ac3ab974e741b605a5de991b3fbe1d5abb03e9d7f3b0"} Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.934163 4688 scope.go:117] "RemoveContainer" containerID="47235787b9c64077add0ac3ab974e741b605a5de991b3fbe1d5abb03e9d7f3b0" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.944756 4688 scope.go:117] "RemoveContainer" containerID="aa039b00a08ec05df72b6b8e976de7195f81ce39293da3c4305dd662b2df2964" Nov 25 12:38:19 crc kubenswrapper[4688]: E1125 12:38:19.945700 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-c76gt_openstack-operators(d4c78fcc-139a-4485-8628-dc14422a4710)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podUID="d4c78fcc-139a-4485-8628-dc14422a4710" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.948463 4688 scope.go:117] "RemoveContainer" containerID="b431561d94048b029026382ca01b77e1b1bc1c235b999c22c83953cc336d5ee9" Nov 25 12:38:19 crc kubenswrapper[4688]: E1125 12:38:19.948914 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=watcher-operator-controller-manager-864885998-gf8vv_openstack-operators(ae188502-8c93-4a53-bb69-b9a964c82bc6)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" podUID="ae188502-8c93-4a53-bb69-b9a964c82bc6" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.952458 4688 generic.go:334] "Generic (PLEG): container finished" podID="1364865a-3285-428d-b672-064400c43c94" containerID="51a6d1ff223cb249707f254d60930719ad6b47c08f8eab629ccdc4f705b4a95c" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.952535 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" event={"ID":"1364865a-3285-428d-b672-064400c43c94","Type":"ContainerDied","Data":"51a6d1ff223cb249707f254d60930719ad6b47c08f8eab629ccdc4f705b4a95c"} Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.954200 4688 scope.go:117] "RemoveContainer" containerID="51a6d1ff223cb249707f254d60930719ad6b47c08f8eab629ccdc4f705b4a95c" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.956381 4688 generic.go:334] "Generic (PLEG): container finished" podID="34cf6884-a630-417d-81ff-08c5ff19be31" containerID="58009941fdf5ac373d610efb9126841c18d84942f03eef251a439162e15ded6e" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.956486 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" event={"ID":"34cf6884-a630-417d-81ff-08c5ff19be31","Type":"ContainerDied","Data":"58009941fdf5ac373d610efb9126841c18d84942f03eef251a439162e15ded6e"} Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.957366 4688 scope.go:117] "RemoveContainer" containerID="58009941fdf5ac373d610efb9126841c18d84942f03eef251a439162e15ded6e" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.959339 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" event={"ID":"92794534-2689-4fde-8597-4cc766d7b3b0","Type":"ContainerDied","Data":"c4cc97c220d8c958a6b84f7534e32fcef8309b4d4f4cb67ee017cac07d771c15"} Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.959281 4688 generic.go:334] "Generic (PLEG): container finished" podID="92794534-2689-4fde-8597-4cc766d7b3b0" containerID="c4cc97c220d8c958a6b84f7534e32fcef8309b4d4f4cb67ee017cac07d771c15" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.960839 4688 scope.go:117] "RemoveContainer" containerID="c4cc97c220d8c958a6b84f7534e32fcef8309b4d4f4cb67ee017cac07d771c15" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.963749 4688 generic.go:334] "Generic (PLEG): container finished" podID="6efe1c76-76a3-4c72-bb71-0963553bbb98" containerID="a78ec334f7a185bab3483093e94ec300a9fca3f527f7d466e207635b40d8fe6f" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.963798 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" event={"ID":"6efe1c76-76a3-4c72-bb71-0963553bbb98","Type":"ContainerDied","Data":"a78ec334f7a185bab3483093e94ec300a9fca3f527f7d466e207635b40d8fe6f"} Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.964701 4688 scope.go:117] "RemoveContainer" containerID="a78ec334f7a185bab3483093e94ec300a9fca3f527f7d466e207635b40d8fe6f" Nov 25 12:38:19 crc kubenswrapper[4688]: E1125 12:38:19.965002 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-q4ffj_openstack-operators(6efe1c76-76a3-4c72-bb71-0963553bbb98)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" podUID="6efe1c76-76a3-4c72-bb71-0963553bbb98" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.966640 4688 generic.go:334] "Generic (PLEG): container finished" podID="55967ae9-2dad-4d45-a8c3-bdaa483f9ea7" containerID="4200e3efdabaefebfe09c35bcda0ad4bc5ff7b0279d52668a2e11891f5270a30" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.966702 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" event={"ID":"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7","Type":"ContainerDied","Data":"4200e3efdabaefebfe09c35bcda0ad4bc5ff7b0279d52668a2e11891f5270a30"} Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.967070 4688 scope.go:117] "RemoveContainer" containerID="4200e3efdabaefebfe09c35bcda0ad4bc5ff7b0279d52668a2e11891f5270a30" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.983578 4688 scope.go:117] "RemoveContainer" containerID="b4c9b13e4a677e9751d2d757bf43013184c9a662ac9fb84d2b850ff6a0653b3b" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.989117 4688 generic.go:334] "Generic (PLEG): container finished" podID="acc9de1c-caf4-40f2-8e3c-470f1059599a" containerID="2002cdcb020df2653fd6d6a9849cea699bf658815227d356a9c66cf4d9dccb7f" exitCode=1 Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.989632 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.989647 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bd8d493b-63c6-4e36-9c7a-58f2ed57f378" Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.989725 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" event={"ID":"acc9de1c-caf4-40f2-8e3c-470f1059599a","Type":"ContainerDied","Data":"2002cdcb020df2653fd6d6a9849cea699bf658815227d356a9c66cf4d9dccb7f"} Nov 25 12:38:19 crc kubenswrapper[4688]: I1125 12:38:19.990575 4688 scope.go:117] "RemoveContainer" containerID="2002cdcb020df2653fd6d6a9849cea699bf658815227d356a9c66cf4d9dccb7f" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.000262 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" event={"ID":"34cf6884-a630-417d-81ff-08c5ff19be31","Type":"ContainerStarted","Data":"40dbb41f559f1ad8ae51c710d6e101aa0f1921727c3d982e6117046690b346f3"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.000743 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.002915 4688 generic.go:334] "Generic (PLEG): container finished" podID="6efa691a-9f05-4d6a-8517-cba5b00426cd" containerID="b7705f63b033a38f714165136cffb97ada13fba9cdd838b4c2d0805f7cd4a869" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.002984 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" 
event={"ID":"6efa691a-9f05-4d6a-8517-cba5b00426cd","Type":"ContainerDied","Data":"b7705f63b033a38f714165136cffb97ada13fba9cdd838b4c2d0805f7cd4a869"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.003020 4688 scope.go:117] "RemoveContainer" containerID="bec070dbf35f6a57db847bf4f9a0ca580e6451ba49f64d187e876839779b36c8" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.003790 4688 scope.go:117] "RemoveContainer" containerID="b7705f63b033a38f714165136cffb97ada13fba9cdd838b4c2d0805f7cd4a869" Nov 25 12:38:21 crc kubenswrapper[4688]: E1125 12:38:21.004109 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-4zlm5_openstack-operators(6efa691a-9f05-4d6a-8517-cba5b00426cd)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" podUID="6efa691a-9f05-4d6a-8517-cba5b00426cd" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.005230 4688 generic.go:334] "Generic (PLEG): container finished" podID="1364865a-3285-428d-b672-064400c43c94" containerID="6a4eec9df28bca5ddac3fd28d547788d0603a80aaf0830911613ad36bb1c5089" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.005266 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" event={"ID":"1364865a-3285-428d-b672-064400c43c94","Type":"ContainerDied","Data":"6a4eec9df28bca5ddac3fd28d547788d0603a80aaf0830911613ad36bb1c5089"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.005972 4688 scope.go:117] "RemoveContainer" containerID="6a4eec9df28bca5ddac3fd28d547788d0603a80aaf0830911613ad36bb1c5089" Nov 25 12:38:21 crc kubenswrapper[4688]: E1125 12:38:21.006219 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_openstack-operators(1364865a-3285-428d-b672-064400c43c94)\"" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" podUID="1364865a-3285-428d-b672-064400c43c94" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.007906 4688 generic.go:334] "Generic (PLEG): container finished" podID="0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d" containerID="33034155b4d90219d1d212d8afd5fa2da5c48cf220b2f004e28b897256b0a898" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.007949 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" event={"ID":"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d","Type":"ContainerDied","Data":"33034155b4d90219d1d212d8afd5fa2da5c48cf220b2f004e28b897256b0a898"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.008452 4688 scope.go:117] "RemoveContainer" containerID="33034155b4d90219d1d212d8afd5fa2da5c48cf220b2f004e28b897256b0a898" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.010929 4688 generic.go:334] "Generic (PLEG): container finished" podID="78451e33-7e86-4635-ac5f-d2c6a9ae6e71" containerID="255edb788a48d9e6dc4c6eaf16de423dfb1af513dbd467f1800503d66ccaabb4" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.010961 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" 
event={"ID":"78451e33-7e86-4635-ac5f-d2c6a9ae6e71","Type":"ContainerDied","Data":"255edb788a48d9e6dc4c6eaf16de423dfb1af513dbd467f1800503d66ccaabb4"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.011500 4688 scope.go:117] "RemoveContainer" containerID="255edb788a48d9e6dc4c6eaf16de423dfb1af513dbd467f1800503d66ccaabb4" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.014996 4688 generic.go:334] "Generic (PLEG): container finished" podID="92794534-2689-4fde-8597-4cc766d7b3b0" containerID="f56b13484908f0085350f81e9a31e27628177105102ab67e793422f7d064aa92" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.015081 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" event={"ID":"92794534-2689-4fde-8597-4cc766d7b3b0","Type":"ContainerDied","Data":"f56b13484908f0085350f81e9a31e27628177105102ab67e793422f7d064aa92"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.015771 4688 scope.go:117] "RemoveContainer" containerID="f56b13484908f0085350f81e9a31e27628177105102ab67e793422f7d064aa92" Nov 25 12:38:21 crc kubenswrapper[4688]: E1125 12:38:21.016064 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-b9jdn_openstack-operators(92794534-2689-4fde-8597-4cc766d7b3b0)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" podUID="92794534-2689-4fde-8597-4cc766d7b3b0" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.018689 4688 generic.go:334] "Generic (PLEG): container finished" podID="808a5b9f-95a2-4f58-abe2-30758a6a7e2a" containerID="839fedc23668dd7678a9fb69adb801aaa6ec538c909481e2cfd0db29fdd558ba" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.018814 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" event={"ID":"808a5b9f-95a2-4f58-abe2-30758a6a7e2a","Type":"ContainerDied","Data":"839fedc23668dd7678a9fb69adb801aaa6ec538c909481e2cfd0db29fdd558ba"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.019542 4688 scope.go:117] "RemoveContainer" containerID="839fedc23668dd7678a9fb69adb801aaa6ec538c909481e2cfd0db29fdd558ba" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.021324 4688 generic.go:334] "Generic (PLEG): container finished" podID="94f12846-9cbe-4997-9160-3545778ecfde" containerID="7d3f7f759665cc6decefab783094d0ed3a82679086716d699cfccfa5c2dfcb83" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.021404 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" event={"ID":"94f12846-9cbe-4997-9160-3545778ecfde","Type":"ContainerDied","Data":"7d3f7f759665cc6decefab783094d0ed3a82679086716d699cfccfa5c2dfcb83"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.022116 4688 scope.go:117] "RemoveContainer" containerID="7d3f7f759665cc6decefab783094d0ed3a82679086716d699cfccfa5c2dfcb83" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.028867 4688 generic.go:334] "Generic (PLEG): container finished" podID="55967ae9-2dad-4d45-a8c3-bdaa483f9ea7" containerID="64af99afe9e30e35287bb3f2a141bf024f76e8504a621eb2126b6be600b8b062" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.028918 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" event={"ID":"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7","Type":"ContainerDied","Data":"64af99afe9e30e35287bb3f2a141bf024f76e8504a621eb2126b6be600b8b062"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.029241 4688 scope.go:117] "RemoveContainer" containerID="64af99afe9e30e35287bb3f2a141bf024f76e8504a621eb2126b6be600b8b062" Nov 25 12:38:21 crc kubenswrapper[4688]: E1125 12:38:21.029467 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-zfvn2_openstack-operators(55967ae9-2dad-4d45-a8c3-bdaa483f9ea7)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" podUID="55967ae9-2dad-4d45-a8c3-bdaa483f9ea7" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.031501 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa49233e-de1b-4bea-85a6-de285e0e60f6" containerID="c3bf243bf178250c26ce4702aea298267ed145b15477e83d93d50d1fc8da3f37" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.031559 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" event={"ID":"fa49233e-de1b-4bea-85a6-de285e0e60f6","Type":"ContainerDied","Data":"c3bf243bf178250c26ce4702aea298267ed145b15477e83d93d50d1fc8da3f37"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.031863 4688 scope.go:117] "RemoveContainer" containerID="c3bf243bf178250c26ce4702aea298267ed145b15477e83d93d50d1fc8da3f37" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.037908 4688 generic.go:334] "Generic (PLEG): container finished" podID="e2f91df4-3b39-4c05-9fee-dd3f7622fd13" containerID="8760e49b2fabf2d2fdb55cd5daa32479972f5d903e950a0d2a5c9ddb9348c30c" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.037958 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" event={"ID":"e2f91df4-3b39-4c05-9fee-dd3f7622fd13","Type":"ContainerDied","Data":"8760e49b2fabf2d2fdb55cd5daa32479972f5d903e950a0d2a5c9ddb9348c30c"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.038383 4688 scope.go:117] "RemoveContainer" containerID="8760e49b2fabf2d2fdb55cd5daa32479972f5d903e950a0d2a5c9ddb9348c30c" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.041241 4688 generic.go:334] "Generic (PLEG): container finished" podID="87bbdcd1-48cf-4310-9131-93dadc55a0f1" containerID="c713eb87f5025c5a2ded355017c9ab4f6e343b330239bd8e3851dc3d4e158e1b" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.041325 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" event={"ID":"87bbdcd1-48cf-4310-9131-93dadc55a0f1","Type":"ContainerDied","Data":"c713eb87f5025c5a2ded355017c9ab4f6e343b330239bd8e3851dc3d4e158e1b"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.043007 4688 scope.go:117] "RemoveContainer" containerID="c713eb87f5025c5a2ded355017c9ab4f6e343b330239bd8e3851dc3d4e158e1b" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.048852 4688 generic.go:334] "Generic (PLEG): container finished" podID="5c7a1a6d-a3f3-4490-a6ba-f521535a1364" containerID="ad2f08198d77f4a91852dd214eeeeda7f7fc4add50b973d6e93f0c66a0227da9" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.048900 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" event={"ID":"5c7a1a6d-a3f3-4490-a6ba-f521535a1364","Type":"ContainerDied","Data":"ad2f08198d77f4a91852dd214eeeeda7f7fc4add50b973d6e93f0c66a0227da9"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.049494 4688 scope.go:117] "RemoveContainer" containerID="ad2f08198d77f4a91852dd214eeeeda7f7fc4add50b973d6e93f0c66a0227da9" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.052971 4688 generic.go:334] "Generic (PLEG): container finished" podID="59ac66df-a38a-4193-a6ff-fd4e74b1b113" containerID="a119ab5221d2689ea1e82bcc5ba063bdd7838b476a9af0446d42847b7d840913" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.053005 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" event={"ID":"59ac66df-a38a-4193-a6ff-fd4e74b1b113","Type":"ContainerDied","Data":"a119ab5221d2689ea1e82bcc5ba063bdd7838b476a9af0446d42847b7d840913"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.053595 4688 scope.go:117] "RemoveContainer" containerID="a119ab5221d2689ea1e82bcc5ba063bdd7838b476a9af0446d42847b7d840913" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.056279 4688 generic.go:334] "Generic (PLEG): container finished" podID="acc9de1c-caf4-40f2-8e3c-470f1059599a" containerID="c915f26a85ace737ca4764d385cbe218f64240865722462e72ef9a3ba8e731d2" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.056356 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" event={"ID":"acc9de1c-caf4-40f2-8e3c-470f1059599a","Type":"ContainerDied","Data":"c915f26a85ace737ca4764d385cbe218f64240865722462e72ef9a3ba8e731d2"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.057366 4688 scope.go:117] "RemoveContainer" containerID="c915f26a85ace737ca4764d385cbe218f64240865722462e72ef9a3ba8e731d2" Nov 25 12:38:21 crc kubenswrapper[4688]: E1125 12:38:21.057740 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-vkj6d_openstack-operators(acc9de1c-caf4-40f2-8e3c-470f1059599a)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" podUID="acc9de1c-caf4-40f2-8e3c-470f1059599a" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.061409 4688 generic.go:334] "Generic (PLEG): container finished" podID="7cd9dc7e-be06-416a-aebe-c0b160c79697" containerID="021d971e8a490d9c20a6d56ede93fb21b48fd3739bfd8bec49775edb8cfab9c9" exitCode=1 Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.061437 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" event={"ID":"7cd9dc7e-be06-416a-aebe-c0b160c79697","Type":"ContainerDied","Data":"021d971e8a490d9c20a6d56ede93fb21b48fd3739bfd8bec49775edb8cfab9c9"} Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.062406 4688 scope.go:117] "RemoveContainer" containerID="021d971e8a490d9c20a6d56ede93fb21b48fd3739bfd8bec49775edb8cfab9c9" Nov 25 12:38:21 crc kubenswrapper[4688]: E1125 12:38:21.063183 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=mariadb-operator-controller-manager-cb6c4fdb7-94snn_openstack-operators(7cd9dc7e-be06-416a-aebe-c0b160c79697)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" podUID="7cd9dc7e-be06-416a-aebe-c0b160c79697" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.084047 4688 scope.go:117] "RemoveContainer" containerID="51a6d1ff223cb249707f254d60930719ad6b47c08f8eab629ccdc4f705b4a95c" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.117606 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c7d92b34-d557-443a-8eab-5b912ed49497" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.268540 4688 scope.go:117] "RemoveContainer" containerID="c4cc97c220d8c958a6b84f7534e32fcef8309b4d4f4cb67ee017cac07d771c15" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.347171 4688 scope.go:117] "RemoveContainer" containerID="4200e3efdabaefebfe09c35bcda0ad4bc5ff7b0279d52668a2e11891f5270a30" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.400657 4688 scope.go:117] "RemoveContainer" containerID="2002cdcb020df2653fd6d6a9849cea699bf658815227d356a9c66cf4d9dccb7f" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.459705 4688 scope.go:117] "RemoveContainer" containerID="47235787b9c64077add0ac3ab974e741b605a5de991b3fbe1d5abb03e9d7f3b0" Nov 25 12:38:21 crc kubenswrapper[4688]: I1125 12:38:21.823488 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.076951 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa49233e-de1b-4bea-85a6-de285e0e60f6" containerID="851584e27500eb09e6c5c1e67f4cd7d7ea51388786cb21551fb6fa45e7b08965" exitCode=1 Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.076986 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" event={"ID":"fa49233e-de1b-4bea-85a6-de285e0e60f6","Type":"ContainerDied","Data":"851584e27500eb09e6c5c1e67f4cd7d7ea51388786cb21551fb6fa45e7b08965"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.077051 4688 scope.go:117] "RemoveContainer" containerID="c3bf243bf178250c26ce4702aea298267ed145b15477e83d93d50d1fc8da3f37" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.078029 4688 scope.go:117] "RemoveContainer" containerID="851584e27500eb09e6c5c1e67f4cd7d7ea51388786cb21551fb6fa45e7b08965" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.078594 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-gzslz_openstack-operators(fa49233e-de1b-4bea-85a6-de285e0e60f6)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" podUID="fa49233e-de1b-4bea-85a6-de285e0e60f6" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.080179 4688 generic.go:334] "Generic (PLEG): container finished" podID="808a5b9f-95a2-4f58-abe2-30758a6a7e2a" containerID="8d4ab18b91ceb000ff66278ec67553e973096e11afaa9fa688ed28fb7a85812e" exitCode=1 Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.080258 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" 
event={"ID":"808a5b9f-95a2-4f58-abe2-30758a6a7e2a","Type":"ContainerDied","Data":"8d4ab18b91ceb000ff66278ec67553e973096e11afaa9fa688ed28fb7a85812e"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.081072 4688 scope.go:117] "RemoveContainer" containerID="8d4ab18b91ceb000ff66278ec67553e973096e11afaa9fa688ed28fb7a85812e" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.081417 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-ltlms_openstack-operators(808a5b9f-95a2-4f58-abe2-30758a6a7e2a)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" podUID="808a5b9f-95a2-4f58-abe2-30758a6a7e2a" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.085231 4688 generic.go:334] "Generic (PLEG): container finished" podID="94f12846-9cbe-4997-9160-3545778ecfde" containerID="d27a0285ad30e553e8825248e2b404721e7875ba22b9fdd96dd8f7e3665f0d87" exitCode=1 Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.085275 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" event={"ID":"94f12846-9cbe-4997-9160-3545778ecfde","Type":"ContainerDied","Data":"d27a0285ad30e553e8825248e2b404721e7875ba22b9fdd96dd8f7e3665f0d87"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.086011 4688 scope.go:117] "RemoveContainer" containerID="d27a0285ad30e553e8825248e2b404721e7875ba22b9fdd96dd8f7e3665f0d87" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.086392 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-vcnvc_openstack-operators(94f12846-9cbe-4997-9160-3545778ecfde)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" podUID="94f12846-9cbe-4997-9160-3545778ecfde" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.093307 4688 generic.go:334] "Generic (PLEG): container finished" podID="78451e33-7e86-4635-ac5f-d2c6a9ae6e71" containerID="a641e915be23b75b2b7c668c132098b64d935b40da136e23321c561559d08127" exitCode=1 Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.093469 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" event={"ID":"78451e33-7e86-4635-ac5f-d2c6a9ae6e71","Type":"ContainerDied","Data":"a641e915be23b75b2b7c668c132098b64d935b40da136e23321c561559d08127"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.094184 4688 scope.go:117] "RemoveContainer" containerID="a641e915be23b75b2b7c668c132098b64d935b40da136e23321c561559d08127" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.094569 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-tn6tq_openstack-operators(78451e33-7e86-4635-ac5f-d2c6a9ae6e71)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" podUID="78451e33-7e86-4635-ac5f-d2c6a9ae6e71" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.101141 4688 generic.go:334] "Generic (PLEG): container finished" podID="5c7a1a6d-a3f3-4490-a6ba-f521535a1364" 
containerID="7505cd1ae6725945b67c38e1ec6be41c6f50618b20a4e0865c181c95ea479923" exitCode=1 Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.101267 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" event={"ID":"5c7a1a6d-a3f3-4490-a6ba-f521535a1364","Type":"ContainerDied","Data":"7505cd1ae6725945b67c38e1ec6be41c6f50618b20a4e0865c181c95ea479923"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.102204 4688 scope.go:117] "RemoveContainer" containerID="7505cd1ae6725945b67c38e1ec6be41c6f50618b20a4e0865c181c95ea479923" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.102453 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-2snng_openstack-operators(5c7a1a6d-a3f3-4490-a6ba-f521535a1364)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" podUID="5c7a1a6d-a3f3-4490-a6ba-f521535a1364" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.107307 4688 scope.go:117] "RemoveContainer" containerID="6a4eec9df28bca5ddac3fd28d547788d0603a80aaf0830911613ad36bb1c5089" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.107608 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_openstack-operators(1364865a-3285-428d-b672-064400c43c94)\"" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" podUID="1364865a-3285-428d-b672-064400c43c94" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.111397 4688 generic.go:334] "Generic (PLEG): container finished" podID="87bbdcd1-48cf-4310-9131-93dadc55a0f1" containerID="03a09b515b289be9eafcba1e515771d88940829b0e20c631145233febd4373e9" exitCode=1 Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.111476 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" event={"ID":"87bbdcd1-48cf-4310-9131-93dadc55a0f1","Type":"ContainerDied","Data":"03a09b515b289be9eafcba1e515771d88940829b0e20c631145233febd4373e9"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.112019 4688 scope.go:117] "RemoveContainer" containerID="03a09b515b289be9eafcba1e515771d88940829b0e20c631145233febd4373e9" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.112342 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-ptqrp_openstack-operators(87bbdcd1-48cf-4310-9131-93dadc55a0f1)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" podUID="87bbdcd1-48cf-4310-9131-93dadc55a0f1" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.115877 4688 generic.go:334] "Generic (PLEG): container finished" podID="0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d" containerID="dceeab541185b6140941770d8dab50a9e7e0a14a215f528b4b18fde69ccf574f" exitCode=1 Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.115933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" 
event={"ID":"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d","Type":"ContainerDied","Data":"dceeab541185b6140941770d8dab50a9e7e0a14a215f528b4b18fde69ccf574f"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.117854 4688 scope.go:117] "RemoveContainer" containerID="dceeab541185b6140941770d8dab50a9e7e0a14a215f528b4b18fde69ccf574f" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.118383 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-q2tdz_openstack-operators(0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" podUID="0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.123648 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" event={"ID":"59ac66df-a38a-4193-a6ff-fd4e74b1b113","Type":"ContainerStarted","Data":"2fe9657d95f9195da19f9f210f52de42088adee7b66d64e72767f9d19883f6b9"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.124015 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.132680 4688 generic.go:334] "Generic (PLEG): container finished" podID="e2f91df4-3b39-4c05-9fee-dd3f7622fd13" containerID="018f0a4b05d247b49ab45565f7b322653e48bfcf9634fccec3c0afd267247f7a" exitCode=1 Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.132748 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" event={"ID":"e2f91df4-3b39-4c05-9fee-dd3f7622fd13","Type":"ContainerDied","Data":"018f0a4b05d247b49ab45565f7b322653e48bfcf9634fccec3c0afd267247f7a"} Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.133344 4688 scope.go:117] "RemoveContainer" containerID="018f0a4b05d247b49ab45565f7b322653e48bfcf9634fccec3c0afd267247f7a" Nov 25 12:38:22 crc kubenswrapper[4688]: E1125 12:38:22.133591 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-kvt5r_openstack-operators(e2f91df4-3b39-4c05-9fee-dd3f7622fd13)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" podUID="e2f91df4-3b39-4c05-9fee-dd3f7622fd13" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.165090 4688 scope.go:117] "RemoveContainer" containerID="839fedc23668dd7678a9fb69adb801aaa6ec538c909481e2cfd0db29fdd558ba" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.237201 4688 scope.go:117] "RemoveContainer" containerID="7d3f7f759665cc6decefab783094d0ed3a82679086716d699cfccfa5c2dfcb83" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.324211 4688 scope.go:117] "RemoveContainer" containerID="255edb788a48d9e6dc4c6eaf16de423dfb1af513dbd467f1800503d66ccaabb4" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.403614 4688 scope.go:117] "RemoveContainer" containerID="ad2f08198d77f4a91852dd214eeeeda7f7fc4add50b973d6e93f0c66a0227da9" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.437142 4688 scope.go:117] "RemoveContainer" containerID="c713eb87f5025c5a2ded355017c9ab4f6e343b330239bd8e3851dc3d4e158e1b" Nov 25 12:38:22 crc kubenswrapper[4688]: 
I1125 12:38:22.468166 4688 scope.go:117] "RemoveContainer" containerID="33034155b4d90219d1d212d8afd5fa2da5c48cf220b2f004e28b897256b0a898" Nov 25 12:38:22 crc kubenswrapper[4688]: I1125 12:38:22.491414 4688 scope.go:117] "RemoveContainer" containerID="8760e49b2fabf2d2fdb55cd5daa32479972f5d903e950a0d2a5c9ddb9348c30c" Nov 25 12:38:23 crc kubenswrapper[4688]: I1125 12:38:23.739923 4688 scope.go:117] "RemoveContainer" containerID="a5b7b05aeb801bfe159c9bf755f247fc3537203688d95edb7a84d1b18c96acd8" Nov 25 12:38:23 crc kubenswrapper[4688]: I1125 12:38:23.747237 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="95c2802c-7143-4d63-8959-434c04453333" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 12:38:23 crc kubenswrapper[4688]: I1125 12:38:23.747317 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 12:38:23 crc kubenswrapper[4688]: I1125 12:38:23.747860 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"5c55a39d1d40ac2d31eb50e5d09324dfefde6c4db0d952bd95388f17e9cedca9"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Nov 25 12:38:23 crc kubenswrapper[4688]: I1125 12:38:23.747908 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="95c2802c-7143-4d63-8959-434c04453333" containerName="kube-state-metrics" containerID="cri-o://5c55a39d1d40ac2d31eb50e5d09324dfefde6c4db0d952bd95388f17e9cedca9" gracePeriod=30 Nov 25 12:38:24 crc kubenswrapper[4688]: I1125 12:38:24.174716 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" event={"ID":"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3","Type":"ContainerStarted","Data":"e437a3d3c6118fb17df2b434528cdbb5e7e1b688b3c5edff2e93d71b02e82d43"} Nov 25 12:38:24 crc kubenswrapper[4688]: I1125 12:38:24.175674 4688 scope.go:117] "RemoveContainer" containerID="e437a3d3c6118fb17df2b434528cdbb5e7e1b688b3c5edff2e93d71b02e82d43" Nov 25 12:38:24 crc kubenswrapper[4688]: E1125 12:38:24.176044 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-744bc4ddc8-58c5m_metallb-system(d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3)\"" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" Nov 25 12:38:24 crc kubenswrapper[4688]: I1125 12:38:24.182082 4688 generic.go:334] "Generic (PLEG): container finished" podID="95c2802c-7143-4d63-8959-434c04453333" containerID="5c55a39d1d40ac2d31eb50e5d09324dfefde6c4db0d952bd95388f17e9cedca9" exitCode=2 Nov 25 12:38:24 crc kubenswrapper[4688]: I1125 12:38:24.182124 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95c2802c-7143-4d63-8959-434c04453333","Type":"ContainerDied","Data":"5c55a39d1d40ac2d31eb50e5d09324dfefde6c4db0d952bd95388f17e9cedca9"} Nov 25 12:38:25 crc kubenswrapper[4688]: I1125 12:38:25.194621 4688 generic.go:334] "Generic (PLEG): container finished" podID="95c2802c-7143-4d63-8959-434c04453333" containerID="b564c9cb677fe65479cd33880a2657cd6d6e68eaf30c0e3bc372a7bc8abcefa2" exitCode=1 Nov 
25 12:38:25 crc kubenswrapper[4688]: I1125 12:38:25.194712 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95c2802c-7143-4d63-8959-434c04453333","Type":"ContainerDied","Data":"b564c9cb677fe65479cd33880a2657cd6d6e68eaf30c0e3bc372a7bc8abcefa2"} Nov 25 12:38:25 crc kubenswrapper[4688]: I1125 12:38:25.195149 4688 scope.go:117] "RemoveContainer" containerID="5c55a39d1d40ac2d31eb50e5d09324dfefde6c4db0d952bd95388f17e9cedca9" Nov 25 12:38:25 crc kubenswrapper[4688]: I1125 12:38:25.195582 4688 scope.go:117] "RemoveContainer" containerID="b564c9cb677fe65479cd33880a2657cd6d6e68eaf30c0e3bc372a7bc8abcefa2" Nov 25 12:38:25 crc kubenswrapper[4688]: I1125 12:38:25.199442 4688 generic.go:334] "Generic (PLEG): container finished" podID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" containerID="e437a3d3c6118fb17df2b434528cdbb5e7e1b688b3c5edff2e93d71b02e82d43" exitCode=1 Nov 25 12:38:25 crc kubenswrapper[4688]: I1125 12:38:25.199482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" event={"ID":"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3","Type":"ContainerDied","Data":"e437a3d3c6118fb17df2b434528cdbb5e7e1b688b3c5edff2e93d71b02e82d43"} Nov 25 12:38:25 crc kubenswrapper[4688]: I1125 12:38:25.200004 4688 scope.go:117] "RemoveContainer" containerID="e437a3d3c6118fb17df2b434528cdbb5e7e1b688b3c5edff2e93d71b02e82d43" Nov 25 12:38:25 crc kubenswrapper[4688]: E1125 12:38:25.200205 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-744bc4ddc8-58c5m_metallb-system(d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3)\"" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3" Nov 25 12:38:25 crc kubenswrapper[4688]: I1125 12:38:25.222230 4688 scope.go:117] "RemoveContainer" containerID="a5b7b05aeb801bfe159c9bf755f247fc3537203688d95edb7a84d1b18c96acd8" Nov 25 12:38:26 crc kubenswrapper[4688]: E1125 12:38:26.010743 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c2802c_7143_4d63_8959_434c04453333.slice/crio-bb6a86d65fc4462bb6feb9a615459b67373d496fb832d756e98ec34719d73768.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c2802c_7143_4d63_8959_434c04453333.slice/crio-conmon-bb6a86d65fc4462bb6feb9a615459b67373d496fb832d756e98ec34719d73768.scope\": RecentStats: unable to find data in memory cache]" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.213416 4688 generic.go:334] "Generic (PLEG): container finished" podID="95c2802c-7143-4d63-8959-434c04453333" containerID="bb6a86d65fc4462bb6feb9a615459b67373d496fb832d756e98ec34719d73768" exitCode=1 Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.213514 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95c2802c-7143-4d63-8959-434c04453333","Type":"ContainerDied","Data":"bb6a86d65fc4462bb6feb9a615459b67373d496fb832d756e98ec34719d73768"} Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.213758 4688 scope.go:117] "RemoveContainer" containerID="b564c9cb677fe65479cd33880a2657cd6d6e68eaf30c0e3bc372a7bc8abcefa2" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.214010 
4688 scope.go:117] "RemoveContainer" containerID="bb6a86d65fc4462bb6feb9a615459b67373d496fb832d756e98ec34719d73768" Nov 25 12:38:26 crc kubenswrapper[4688]: E1125 12:38:26.214316 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(95c2802c-7143-4d63-8959-434c04453333)\"" pod="openstack/kube-state-metrics-0" podUID="95c2802c-7143-4d63-8959-434c04453333" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.752650 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.752719 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.753562 4688 scope.go:117] "RemoveContainer" containerID="a78ec334f7a185bab3483093e94ec300a9fca3f527f7d466e207635b40d8fe6f" Nov 25 12:38:26 crc kubenswrapper[4688]: E1125 12:38:26.753997 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-q4ffj_openstack-operators(6efe1c76-76a3-4c72-bb71-0963553bbb98)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" podUID="6efe1c76-76a3-4c72-bb71-0963553bbb98" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.771783 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.771869 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.773104 4688 scope.go:117] "RemoveContainer" containerID="03a09b515b289be9eafcba1e515771d88940829b0e20c631145233febd4373e9" Nov 25 12:38:26 crc kubenswrapper[4688]: E1125 12:38:26.773716 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-ptqrp_openstack-operators(87bbdcd1-48cf-4310-9131-93dadc55a0f1)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" podUID="87bbdcd1-48cf-4310-9131-93dadc55a0f1" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.786129 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.786391 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.787373 4688 scope.go:117] "RemoveContainer" containerID="c915f26a85ace737ca4764d385cbe218f64240865722462e72ef9a3ba8e731d2" Nov 25 12:38:26 crc kubenswrapper[4688]: E1125 12:38:26.787936 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-vkj6d_openstack-operators(acc9de1c-caf4-40f2-8e3c-470f1059599a)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" podUID="acc9de1c-caf4-40f2-8e3c-470f1059599a" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.882463 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.882540 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.883133 4688 scope.go:117] "RemoveContainer" containerID="7505cd1ae6725945b67c38e1ec6be41c6f50618b20a4e0865c181c95ea479923" Nov 25 12:38:26 crc kubenswrapper[4688]: E1125 12:38:26.883457 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-2snng_openstack-operators(5c7a1a6d-a3f3-4490-a6ba-f521535a1364)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" podUID="5c7a1a6d-a3f3-4490-a6ba-f521535a1364" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.972016 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.972072 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:38:26 crc kubenswrapper[4688]: I1125 12:38:26.972778 4688 scope.go:117] "RemoveContainer" containerID="64af99afe9e30e35287bb3f2a141bf024f76e8504a621eb2126b6be600b8b062" Nov 25 12:38:26 crc kubenswrapper[4688]: E1125 12:38:26.973053 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-zfvn2_openstack-operators(55967ae9-2dad-4d45-a8c3-bdaa483f9ea7)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" podUID="55967ae9-2dad-4d45-a8c3-bdaa483f9ea7" Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.129287 4688 scope.go:117] "RemoveContainer" containerID="fde7330c569918087a32cb1e9b2bfbccf210dd7c3e6a7529008e72b5f6d55ae3" Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.138933 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.138981 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.139665 4688 scope.go:117] "RemoveContainer" containerID="a641e915be23b75b2b7c668c132098b64d935b40da136e23321c561559d08127" Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.139912 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.139912 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-tn6tq_openstack-operators(78451e33-7e86-4635-ac5f-d2c6a9ae6e71)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" podUID="78451e33-7e86-4635-ac5f-d2c6a9ae6e71"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.171684 4688 scope.go:117] "RemoveContainer" containerID="d7cbe2f712401aa13df6512e3d3fd1d7bee880dfe01ed24f2f6e9df6d0ba661a"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.197615 4688 scope.go:117] "RemoveContainer" containerID="f91547d0c5aff630284efc79b587b50b2309c1363b1038b4659296ff93dc954d"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.229379 4688 scope.go:117] "RemoveContainer" containerID="0a663f93779b74729816102cefaea90e0d93c5e415bb98ee1fa840f41257187f"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.232218 4688 scope.go:117] "RemoveContainer" containerID="bb6a86d65fc4462bb6feb9a615459b67373d496fb832d756e98ec34719d73768"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.232718 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(95c2802c-7143-4d63-8959-434c04453333)\"" pod="openstack/kube-state-metrics-0" podUID="95c2802c-7143-4d63-8959-434c04453333"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.241000 4688 scope.go:117] "RemoveContainer" containerID="c915f26a85ace737ca4764d385cbe218f64240865722462e72ef9a3ba8e731d2"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.242959 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-vkj6d_openstack-operators(acc9de1c-caf4-40f2-8e3c-470f1059599a)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" podUID="acc9de1c-caf4-40f2-8e3c-470f1059599a"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.260218 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.260279 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.260861 4688 scope.go:117] "RemoveContainer" containerID="f56b13484908f0085350f81e9a31e27628177105102ab67e793422f7d064aa92"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.261218 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-b9jdn_openstack-operators(92794534-2689-4fde-8597-4cc766d7b3b0)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" podUID="92794534-2689-4fde-8597-4cc766d7b3b0"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.494300 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.494357 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.495276 4688 scope.go:117] "RemoveContainer" containerID="6eb7cd25e8c9d67bb4798a378ac7c29959b85c81d07162ab3299db95556e38b5"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.495584 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-9qfpp_openstack-operators(592ea8b1-efc4-4027-a7dc-3943125fd935)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" podUID="592ea8b1-efc4-4027-a7dc-3943125fd935"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.509365 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.509439 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.510208 4688 scope.go:117] "RemoveContainer" containerID="d27a0285ad30e553e8825248e2b404721e7875ba22b9fdd96dd8f7e3665f0d87"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.510479 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-vcnvc_openstack-operators(94f12846-9cbe-4997-9160-3545778ecfde)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" podUID="94f12846-9cbe-4997-9160-3545778ecfde"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.527684 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.528542 4688 scope.go:117] "RemoveContainer" containerID="b7705f63b033a38f714165136cffb97ada13fba9cdd838b4c2d0805f7cd4a869"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.528881 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-4zlm5_openstack-operators(6efa691a-9f05-4d6a-8517-cba5b00426cd)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" podUID="6efa691a-9f05-4d6a-8517-cba5b00426cd"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.534185 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.575267 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.575329 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.576309 4688 scope.go:117] "RemoveContainer" containerID="021d971e8a490d9c20a6d56ede93fb21b48fd3739bfd8bec49775edb8cfab9c9"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.576848 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-94snn_openstack-operators(7cd9dc7e-be06-416a-aebe-c0b160c79697)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" podUID="7cd9dc7e-be06-416a-aebe-c0b160c79697"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.622635 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.622705 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.623379 4688 scope.go:117] "RemoveContainer" containerID="8d4ab18b91ceb000ff66278ec67553e973096e11afaa9fa688ed28fb7a85812e"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.623616 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-ltlms_openstack-operators(808a5b9f-95a2-4f58-abe2-30758a6a7e2a)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" podUID="808a5b9f-95a2-4f58-abe2-30758a6a7e2a"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.683443 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.683583 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.684136 4688 scope.go:117] "RemoveContainer" containerID="018f0a4b05d247b49ab45565f7b322653e48bfcf9634fccec3c0afd267247f7a"
Nov 25 12:38:27 crc kubenswrapper[4688]: E1125 12:38:27.684357 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-kvt5r_openstack-operators(e2f91df4-3b39-4c05-9fee-dd3f7622fd13)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" podUID="e2f91df4-3b39-4c05-9fee-dd3f7622fd13"
Nov 25 12:38:27 crc kubenswrapper[4688]: I1125 12:38:27.787343 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.050507 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.050913 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.051608 4688 scope.go:117] "RemoveContainer" containerID="851584e27500eb09e6c5c1e67f4cd7d7ea51388786cb21551fb6fa45e7b08965"
Nov 25 12:38:28 crc kubenswrapper[4688]: E1125 12:38:28.051832 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-gzslz_openstack-operators(fa49233e-de1b-4bea-85a6-de285e0e60f6)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" podUID="fa49233e-de1b-4bea-85a6-de285e0e60f6"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.189279 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.189347 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.190080 4688 scope.go:117] "RemoveContainer" containerID="b431561d94048b029026382ca01b77e1b1bc1c235b999c22c83953cc336d5ee9"
Nov 25 12:38:28 crc kubenswrapper[4688]: E1125 12:38:28.190365 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-gf8vv_openstack-operators(ae188502-8c93-4a53-bb69-b9a964c82bc6)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" podUID="ae188502-8c93-4a53-bb69-b9a964c82bc6"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.249233 4688 scope.go:117] "RemoveContainer" containerID="b7705f63b033a38f714165136cffb97ada13fba9cdd838b4c2d0805f7cd4a869"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.249360 4688 scope.go:117] "RemoveContainer" containerID="018f0a4b05d247b49ab45565f7b322653e48bfcf9634fccec3c0afd267247f7a"
Nov 25 12:38:28 crc kubenswrapper[4688]: E1125 12:38:28.249548 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-4zlm5_openstack-operators(6efa691a-9f05-4d6a-8517-cba5b00426cd)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" podUID="6efa691a-9f05-4d6a-8517-cba5b00426cd"
Nov 25 12:38:28 crc kubenswrapper[4688]: E1125 12:38:28.249646 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-kvt5r_openstack-operators(e2f91df4-3b39-4c05-9fee-dd3f7622fd13)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" podUID="e2f91df4-3b39-4c05-9fee-dd3f7622fd13"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.450064 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.450115 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.451124 4688 scope.go:117] "RemoveContainer" containerID="8fcb21199572599de638da4bab301d2df24e292b1800125da62c1e6fb99e94af"
Nov 25 12:38:28 crc kubenswrapper[4688]: E1125 12:38:28.452135 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-c877c965-jptwb_openstack-operators(3f65195f-4002-4d44-a25c-3c2603ed14c6)\"" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" podUID="3f65195f-4002-4d44-a25c-3c2603ed14c6"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.488293 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.488351 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.489199 4688 scope.go:117] "RemoveContainer" containerID="dceeab541185b6140941770d8dab50a9e7e0a14a215f528b4b18fde69ccf574f"
Nov 25 12:38:28 crc kubenswrapper[4688]: E1125 12:38:28.489598 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-q2tdz_openstack-operators(0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" podUID="0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.536096 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.536170 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.536870 4688 scope.go:117] "RemoveContainer" containerID="6550b7e2df2b22b3c044a909d54ab7e182c473df8b8c5b92328410d06422739d"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.582272 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.582362 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.583653 4688 scope.go:117] "RemoveContainer" containerID="aa039b00a08ec05df72b6b8e976de7195f81ce39293da3c4305dd662b2df2964"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.617666 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-dcnc8"
Nov 25 12:38:28 crc kubenswrapper[4688]: I1125 12:38:28.733892 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.059452 4688 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.114684 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.149631 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.260858 4688 generic.go:334] "Generic (PLEG): container finished" podID="d4c78fcc-139a-4485-8628-dc14422a4710" containerID="7b28f41ba2e5bfa29e0a5435d196868eedaad48cd1b9f6bd545e3de7cb9ef627" exitCode=1
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.260916 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" event={"ID":"d4c78fcc-139a-4485-8628-dc14422a4710","Type":"ContainerDied","Data":"7b28f41ba2e5bfa29e0a5435d196868eedaad48cd1b9f6bd545e3de7cb9ef627"}
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.261026 4688 scope.go:117] "RemoveContainer" containerID="aa039b00a08ec05df72b6b8e976de7195f81ce39293da3c4305dd662b2df2964"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.262069 4688 scope.go:117] "RemoveContainer" containerID="7b28f41ba2e5bfa29e0a5435d196868eedaad48cd1b9f6bd545e3de7cb9ef627"
Nov 25 12:38:29 crc kubenswrapper[4688]: E1125 12:38:29.262476 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-c76gt_openstack-operators(d4c78fcc-139a-4485-8628-dc14422a4710)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podUID="d4c78fcc-139a-4485-8628-dc14422a4710"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.264873 4688 generic.go:334] "Generic (PLEG): container finished" podID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" containerID="df0f433e30e32f0d3c04ad3c1883e48f5fbd03269f030372bac74384dafaddab" exitCode=1
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.264926 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" event={"ID":"3649a66a-709f-4b77-b798-e5f90eeb2e5d","Type":"ContainerDied","Data":"df0f433e30e32f0d3c04ad3c1883e48f5fbd03269f030372bac74384dafaddab"}
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.265610 4688 scope.go:117] "RemoveContainer" containerID="df0f433e30e32f0d3c04ad3c1883e48f5fbd03269f030372bac74384dafaddab"
Nov 25 12:38:29 crc kubenswrapper[4688]: E1125 12:38:29.265895 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-nxnmg_openstack-operators(3649a66a-709f-4b77-b798-e5f90eeb2e5d)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podUID="3649a66a-709f-4b77-b798-e5f90eeb2e5d"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.269300 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.318737 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.321242 4688 scope.go:117] "RemoveContainer" containerID="6550b7e2df2b22b3c044a909d54ab7e182c473df8b8c5b92328410d06422739d"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.357746 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.402589 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.418341 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.487051 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-9644ff45d-57xk4"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.567402 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.676668 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.715407 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.751698 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.833226 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.933944 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.943970 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7rt7j"
Nov 25 12:38:29 crc kubenswrapper[4688]: I1125 12:38:29.962230 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.007771 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2m9fw"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.149409 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.257426 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.454072 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.490902 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rgdkl"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.502894 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.508271 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.630002 4688 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.766584 4688 scope.go:117] "RemoveContainer" containerID="8417dc9bfb0b29f41ec6ee75bcd1ba3518a269509fa7de6cf91626cd8e1f2974"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.785686 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.811468 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.892243 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.959542 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 25 12:38:30 crc kubenswrapper[4688]: I1125 12:38:30.992901 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.127564 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.158359 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-q2s8c"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.289539 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-htqjx"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.293185 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" event={"ID":"93553656-ef25-4318-81f1-a4e7f973ed38","Type":"ContainerStarted","Data":"af0595f0847135b8eee76141d94ecc8e009bf8b81ecd5812d8380fdfc081af59"}
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.302599 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.303195 4688 scope.go:117] "RemoveContainer" containerID="e437a3d3c6118fb17df2b434528cdbb5e7e1b688b3c5edff2e93d71b02e82d43"
Nov 25 12:38:31 crc kubenswrapper[4688]: E1125 12:38:31.303465 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-744bc4ddc8-58c5m_metallb-system(d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3)\"" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.331208 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.426402 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9n9d8"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.488053 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.518763 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.589400 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.603446 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.658751 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.721220 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.723488 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.740854 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.756999 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.801909 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.823703 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.824471 4688 scope.go:117] "RemoveContainer" containerID="6a4eec9df28bca5ddac3fd28d547788d0603a80aaf0830911613ad36bb1c5089"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.845933 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.959437 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.976427 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 25 12:38:31 crc kubenswrapper[4688]: I1125 12:38:31.991108 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.003957 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.049959 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-29gmw"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.076382 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.185929 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.198559 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.201146 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.307921 4688 generic.go:334] "Generic (PLEG): container finished" podID="93553656-ef25-4318-81f1-a4e7f973ed38" containerID="af0595f0847135b8eee76141d94ecc8e009bf8b81ecd5812d8380fdfc081af59" exitCode=1
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.307978 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" event={"ID":"93553656-ef25-4318-81f1-a4e7f973ed38","Type":"ContainerDied","Data":"af0595f0847135b8eee76141d94ecc8e009bf8b81ecd5812d8380fdfc081af59"}
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.308021 4688 scope.go:117] "RemoveContainer" containerID="8417dc9bfb0b29f41ec6ee75bcd1ba3518a269509fa7de6cf91626cd8e1f2974"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.308682 4688 scope.go:117] "RemoveContainer" containerID="af0595f0847135b8eee76141d94ecc8e009bf8b81ecd5812d8380fdfc081af59"
Nov 25 12:38:32 crc kubenswrapper[4688]: E1125 12:38:32.308974 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-wf6w6_openstack-operators(93553656-ef25-4318-81f1-a4e7f973ed38)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" podUID="93553656-ef25-4318-81f1-a4e7f973ed38"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.315272 4688 generic.go:334] "Generic (PLEG): container finished" podID="1364865a-3285-428d-b672-064400c43c94" containerID="a70d847ef7212988f7692aeaf761ed11b7cf3fa137aaf318427b182cbb80eb79" exitCode=1
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.315308 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" event={"ID":"1364865a-3285-428d-b672-064400c43c94","Type":"ContainerDied","Data":"a70d847ef7212988f7692aeaf761ed11b7cf3fa137aaf318427b182cbb80eb79"}
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.316156 4688 scope.go:117] "RemoveContainer" containerID="a70d847ef7212988f7692aeaf761ed11b7cf3fa137aaf318427b182cbb80eb79"
Nov 25 12:38:32 crc kubenswrapper[4688]: E1125 12:38:32.316379 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_openstack-operators(1364865a-3285-428d-b672-064400c43c94)\"" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" podUID="1364865a-3285-428d-b672-064400c43c94"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.358878 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.376239 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qf9b4"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.406776 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.411187 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
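
The generic.go:334 PLEG entries pin down the failure mode: each manager container starts, exits almost immediately with exitCode=1, and the next back-off doubles to 20s. When triaging a capture like this one it helps to tally the CrashLoopBackOff messages per pod; a small stdlib-only sketch (the regular expression is fitted to these kubenswrapper lines, so treat it as an assumption):

```go
// Minimal triage sketch: read a saved kubelet journal on stdin and tally
// CrashLoopBackOff entries per pod, keeping the last back-off duration seen.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g. `back-off 20s restarting failed container=manager pod=<name>_<ns>(...)`.
var backoff = regexp.MustCompile(`back-off (\S+) restarting failed container=(\S+) pod=(\S+?)_`)

func main() {
	counts := map[string]int{}
	last := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := backoff.FindStringSubmatch(sc.Text()); m != nil {
			pod := m[3]
			counts[pod]++
			last[pod] = m[1]
		}
	}
	for pod, n := range counts {
		fmt.Printf("%-60s %3d entries, last back-off %s\n", pod, n, last[pod])
	}
}
```

Fed this journal on stdin (e.g. journalctl -u kubelet | go run triage.go), it prints one line per flapping pod with the number of back-off entries and the last back-off duration observed.
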
containerID="6a4eec9df28bca5ddac3fd28d547788d0603a80aaf0830911613ad36bb1c5089" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.454360 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.459172 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hgvg9" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.461187 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.463264 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.464045 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gmvqm" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.521710 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.527292 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fq4xk" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.591681 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-plml2" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.664922 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.677008 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.693019 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.723799 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.746114 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7lb7g" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.809839 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.812142 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.816633 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.880886 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.887337 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.903673 4688 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-glance-default-internal-svc" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.940932 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 12:38:32 crc kubenswrapper[4688]: I1125 12:38:32.987230 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.108377 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.111330 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.126451 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.171703 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.175775 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.184894 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.292043 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-v9r77" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.310084 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.320095 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.373642 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.388504 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.417845 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l8kk9" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.494634 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.507718 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.519381 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.528409 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.577322 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.610648 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"openstack-scripts" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.661502 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.670786 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.700726 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-p4mpl" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.708921 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.719102 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.735138 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.738047 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.738081 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.738732 4688 scope.go:117] "RemoveContainer" containerID="bb6a86d65fc4462bb6feb9a615459b67373d496fb832d756e98ec34719d73768" Nov 25 12:38:33 crc kubenswrapper[4688]: E1125 12:38:33.739055 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(95c2802c-7143-4d63-8959-434c04453333)\"" pod="openstack/kube-state-metrics-0" podUID="95c2802c-7143-4d63-8959-434c04453333" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.776433 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.776433 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.791894 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.858967 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-7r9qr" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.864450 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.875108 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.900951 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.907548 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 12:38:33 
Nov 25 12:38:33 crc kubenswrapper[4688]: I1125 12:38:33.930212 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.018251 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.026443 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.087937 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.088707 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.093806 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.172249 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.200858 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.204435 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.294285 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.364603 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.387038 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.393826 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.451308 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.459588 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.508196 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.606452 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.677464 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.683429 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.707022 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.711492 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.723369 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.879391 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.902188 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.903776 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gn9hd"
Nov 25 12:38:34 crc kubenswrapper[4688]: I1125 12:38:34.904770 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.010509 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.052336 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.065625 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.140320 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.192986 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.232659 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.240935 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.246147 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.279678 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.287277 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.325410 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.348504 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.375553 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-c2p2v"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.393839 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.406828 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.412166 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2lv7t"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.422792 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.440960 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.466408 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.499709 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.504044 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.569798 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.576319 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.614379 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.619758 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.654302 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.673748 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.749422 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.810426 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.827190 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.926400 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.930417 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.938161 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.956799 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.971545 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.972471 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 25 12:38:35 crc kubenswrapper[4688]: I1125 12:38:35.984356 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.048253 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.049222 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c56v7"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.050448 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.066468 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wxlh5"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.067640 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.091146 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.148057 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.148178 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.174054 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.188033 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.229111 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fzrzv"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.229413 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.252397 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.253608 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.255919 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.284871 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.285118 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.292632 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.292632 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.296243 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-psxz2"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.357002 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.373029 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.376696 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.426058 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.428376 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.458457 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.465941 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7xlf2"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.477489 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.486103 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9l96t"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.541168 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.577099 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.577270 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.669268 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kz9g5"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.669497 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.678806 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.722876 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-prjxn"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.758834 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zhzbv"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.776670 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.801351 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.830386 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.832870 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.856755 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.892800 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.927974 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.941598 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.948003 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 25 12:38:36 crc kubenswrapper[4688]: I1125 12:38:36.985863 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.027206 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.027705 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.039377 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.046198 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.096492 4688 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.098918 4688 reflector.go:368] Caches populated for *v1.Secret from
object-"openstack"/"neutron-neutron-dockercfg-86cvc" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.126928 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.158478 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.187551 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.276810 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.349028 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.351763 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.360467 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.379707 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.381349 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.395351 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.407144 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.418275 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.456727 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.466084 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.504279 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.526248 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.527444 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.553773 4688 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.565103 4688 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.616982 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.622044 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.628380 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.640137 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.647701 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.664770 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.677087 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.724929 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.724946 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.727961 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.739700 4688 scope.go:117] "RemoveContainer" containerID="03a09b515b289be9eafcba1e515771d88940829b0e20c631145233febd4373e9" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.739837 4688 scope.go:117] "RemoveContainer" containerID="f56b13484908f0085350f81e9a31e27628177105102ab67e793422f7d064aa92" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.755252 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.756414 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.777386 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.782951 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qzhjq" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.783416 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.793041 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.818716 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.891873 4688 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-index-dockercfg-kccm7" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.892827 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.957795 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 12:38:37 crc kubenswrapper[4688]: I1125 12:38:37.966972 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.005823 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cmtn6" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.022852 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.130620 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.143318 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.156288 4688 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.159179 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.178100 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.178899 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.191035 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.194414 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-csmhw" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.210704 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nfg9m" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.261637 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.269327 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.282627 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.300058 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.330746 4688 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.351154 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.391676 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.395560 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" event={"ID":"87bbdcd1-48cf-4310-9131-93dadc55a0f1","Type":"ContainerStarted","Data":"d13434930fd15bc18cf89cb017d926dcb681522f2fa1cb4b3fb3c9c84980acae"} Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.396851 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.400626 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" event={"ID":"92794534-2689-4fde-8597-4cc766d7b3b0","Type":"ContainerStarted","Data":"aa01f80990e0da5623eae88896bdd306a6ea13b5d6079ee9b8e116df05c53c99"} Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.401066 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.495105 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.497160 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.535197 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.535969 4688 scope.go:117] "RemoveContainer" containerID="df0f433e30e32f0d3c04ad3c1883e48f5fbd03269f030372bac74384dafaddab" Nov 25 12:38:38 crc kubenswrapper[4688]: E1125 12:38:38.536381 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-nxnmg_openstack-operators(3649a66a-709f-4b77-b798-e5f90eeb2e5d)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podUID="3649a66a-709f-4b77-b798-e5f90eeb2e5d" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.567137 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.577934 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.582274 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.583309 4688 scope.go:117] "RemoveContainer" 
containerID="7b28f41ba2e5bfa29e0a5435d196868eedaad48cd1b9f6bd545e3de7cb9ef627" Nov 25 12:38:38 crc kubenswrapper[4688]: E1125 12:38:38.583699 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-c76gt_openstack-operators(d4c78fcc-139a-4485-8628-dc14422a4710)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podUID="d4c78fcc-139a-4485-8628-dc14422a4710" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.603793 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.644198 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tbt79" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.657457 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.680800 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.699383 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.740233 4688 scope.go:117] "RemoveContainer" containerID="8fcb21199572599de638da4bab301d2df24e292b1800125da62c1e6fb99e94af" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.740477 4688 scope.go:117] "RemoveContainer" containerID="6eb7cd25e8c9d67bb4798a378ac7c29959b85c81d07162ab3299db95556e38b5" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.740678 4688 scope.go:117] "RemoveContainer" containerID="d27a0285ad30e553e8825248e2b404721e7875ba22b9fdd96dd8f7e3665f0d87" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.740845 4688 scope.go:117] "RemoveContainer" containerID="b7705f63b033a38f714165136cffb97ada13fba9cdd838b4c2d0805f7cd4a869" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.741095 4688 scope.go:117] "RemoveContainer" containerID="c915f26a85ace737ca4764d385cbe218f64240865722462e72ef9a3ba8e731d2" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.748571 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gbxsr" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.748853 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.772519 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.777336 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.790621 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.792135 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.811988 4688 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.812294 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.815408 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.816365 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.848699 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.882346 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jhscs" Nov 25 12:38:38 crc kubenswrapper[4688]: I1125 12:38:38.929658 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.003083 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.069303 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.082246 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.092386 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.134785 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.191278 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.225252 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.226834 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.234221 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.245428 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.276730 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.282247 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.300030 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 
25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.324553 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.412395 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.412852 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" event={"ID":"592ea8b1-efc4-4027-a7dc-3943125fd935","Type":"ContainerStarted","Data":"ea2cef178a05c51b3108b89787a42eb8aed671c4e5c4752a625ca738dada1c5e"} Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.413130 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.414727 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.417122 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" event={"ID":"6efa691a-9f05-4d6a-8517-cba5b00426cd","Type":"ContainerStarted","Data":"108e5304bcfe8bbdc1d869b9cf178be0810be2921d8c4a774ae9e0f7ae430bde"} Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.417391 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.420269 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" event={"ID":"94f12846-9cbe-4997-9160-3545778ecfde","Type":"ContainerStarted","Data":"c9d2ea310cd44f7add17dbf7fc27eeb33a3c1de371071df374e1dc92262295bb"} Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.420483 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.423300 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" event={"ID":"acc9de1c-caf4-40f2-8e3c-470f1059599a","Type":"ContainerStarted","Data":"6a7bcdf6a9002a9218c8f4e13981f0953c158c9ce1a7eb44e987304e2b5d7188"} Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.423532 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.425811 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" event={"ID":"3f65195f-4002-4d44-a25c-3c2603ed14c6","Type":"ContainerStarted","Data":"d56026e6d0b6d5d98a5b2b784a98ec32f82ddec02a1779f9966558f73c2ebbdd"} Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.426202 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.442603 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 12:38:39 
crc kubenswrapper[4688]: I1125 12:38:39.457390 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.478606 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.495575 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.506768 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.515103 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.576043 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.579110 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.618821 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rpj24" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.618821 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-dh289" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.668038 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.706549 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.740340 4688 scope.go:117] "RemoveContainer" containerID="a641e915be23b75b2b7c668c132098b64d935b40da136e23321c561559d08127" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.742560 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cc2v5" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.744918 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.843715 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.851006 4688 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.865022 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.865104 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.869452 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.898375 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.899773 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.899753302 podStartE2EDuration="21.899753302s" podCreationTimestamp="2025-11-25 12:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 12:38:39.88589887 +0000 UTC m=+1469.995527738" watchObservedRunningTime="2025-11-25 12:38:39.899753302 +0000 UTC m=+1470.009382180" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.902556 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gx949" Nov 25 12:38:39 crc kubenswrapper[4688]: I1125 12:38:39.915768 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.003101 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xh7dt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.048385 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.064888 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.084496 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.104504 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.131008 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.203093 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.223111 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.281159 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.339912 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.351469 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.384663 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.434549 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.436999 4688 generic.go:334] "Generic (PLEG): container finished" podID="3d15c9cb-bc3f-4042-a05e-1a6e66e4348c" 
containerID="5001e9df24bae42e2e3036bb7ac2ed906c4ba6793e7df9fe830c2f537bee5332" exitCode=1 Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.437068 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4" event={"ID":"3d15c9cb-bc3f-4042-a05e-1a6e66e4348c","Type":"ContainerDied","Data":"5001e9df24bae42e2e3036bb7ac2ed906c4ba6793e7df9fe830c2f537bee5332"} Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.437787 4688 scope.go:117] "RemoveContainer" containerID="5001e9df24bae42e2e3036bb7ac2ed906c4ba6793e7df9fe830c2f537bee5332" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.441665 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq" event={"ID":"78451e33-7e86-4635-ac5f-d2c6a9ae6e71","Type":"ContainerStarted","Data":"8079691f0641d98d4d145eb86cf822f31a46c8f3767bd12f822457c7cace13c1"} Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.463422 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.464605 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.516084 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.540474 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f7h8n" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.629237 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.665091 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jlqvn" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.676236 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9kv58" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.746061 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.747839 4688 scope.go:117] "RemoveContainer" containerID="a78ec334f7a185bab3483093e94ec300a9fca3f527f7d466e207635b40d8fe6f" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.748621 4688 scope.go:117] "RemoveContainer" containerID="64af99afe9e30e35287bb3f2a141bf024f76e8504a621eb2126b6be600b8b062" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.749066 4688 scope.go:117] "RemoveContainer" containerID="7505cd1ae6725945b67c38e1ec6be41c6f50618b20a4e0865c181c95ea479923" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.749191 4688 scope.go:117] "RemoveContainer" containerID="dceeab541185b6140941770d8dab50a9e7e0a14a215f528b4b18fde69ccf574f" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.749323 4688 scope.go:117] "RemoveContainer" containerID="021d971e8a490d9c20a6d56ede93fb21b48fd3739bfd8bec49775edb8cfab9c9" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.749881 4688 scope.go:117] "RemoveContainer" containerID="8d4ab18b91ceb000ff66278ec67553e973096e11afaa9fa688ed28fb7a85812e" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.780374 4688 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.796216 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.821301 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.908740 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.909310 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.915919 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.950912 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 12:38:40 crc kubenswrapper[4688]: I1125 12:38:40.953357 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.001690 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.018738 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.022537 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.124174 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.168028 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.255370 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.287646 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-g74wp" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.298144 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.375517 4688 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.375794 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://57dde6a7c4f85cf4c3e2af6ab924179275e8ede39251ec5a8f2f42561f4f95b1" gracePeriod=5 Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.392027 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 
12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.454164 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" event={"ID":"0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d","Type":"ContainerStarted","Data":"f63a64cbd89b350a8e7fcbb8bc185ab91cc02f892f7b5ba353bf0a99c58ebb33"} Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.454702 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.457342 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" event={"ID":"5c7a1a6d-a3f3-4490-a6ba-f521535a1364","Type":"ContainerStarted","Data":"d7359c8a94b36b071e7fd9903cf2359fd8507ab52b9a64461f4fdddfc8632eec"} Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.457646 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.460440 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" event={"ID":"6efe1c76-76a3-4c72-bb71-0963553bbb98","Type":"ContainerStarted","Data":"9b8681a1b4b7ac2df00e21ec8c7b8ec1afe1d89340206b5e6414ce16d9c4c82c"} Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.461089 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.463936 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" event={"ID":"7cd9dc7e-be06-416a-aebe-c0b160c79697","Type":"ContainerStarted","Data":"28d32d9a1b1d145e10e4bfdc0ce248fdc499e6756ac2d14a9f4ac082f09ef84d"} Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.464114 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.466193 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" event={"ID":"55967ae9-2dad-4d45-a8c3-bdaa483f9ea7","Type":"ContainerStarted","Data":"71fa1b11d8df5bfe1878f01dfebd163197edea2ef50e605b47027cc598c1e969"} Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.466396 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.468373 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-d2tx4" event={"ID":"3d15c9cb-bc3f-4042-a05e-1a6e66e4348c","Type":"ContainerStarted","Data":"00e10e65791a633ca66468732395d8b405b0c7f848f1b4d1c41f021f737766f8"} Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.471498 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" event={"ID":"808a5b9f-95a2-4f58-abe2-30758a6a7e2a","Type":"ContainerStarted","Data":"0f3f768e08192af879c06f9177c0fbdba29db85591149c30116f4f8242d63538"} Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.472222 4688 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.508146 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.534380 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.540070 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rjthr" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.582890 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.651753 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.677761 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9vktg" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.720411 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.740481 4688 scope.go:117] "RemoveContainer" containerID="b431561d94048b029026382ca01b77e1b1bc1c235b999c22c83953cc336d5ee9" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.767989 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.825847 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.826503 4688 scope.go:117] "RemoveContainer" containerID="a70d847ef7212988f7692aeaf761ed11b7cf3fa137aaf318427b182cbb80eb79" Nov 25 12:38:41 crc kubenswrapper[4688]: E1125 12:38:41.826732 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_openstack-operators(1364865a-3285-428d-b672-064400c43c94)\"" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" podUID="1364865a-3285-428d-b672-064400c43c94" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.836090 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.959626 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.962964 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 12:38:41 crc kubenswrapper[4688]: I1125 12:38:41.995604 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.038693 4688 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.093492 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.108900 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.171702 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.177557 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.270322 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.352059 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rvmp5"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.469127 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.484445 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv" event={"ID":"ae188502-8c93-4a53-bb69-b9a964c82bc6","Type":"ContainerStarted","Data":"9ebcd038c452b8d082641feb9b05028f2815635f534b0bc9400cb08e1e393ec2"}
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.484802 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.589141 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-n5wsz"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.589785 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.623627 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.658226 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.717785 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.726361 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.740164 4688 scope.go:117] "RemoveContainer" containerID="018f0a4b05d247b49ab45565f7b322653e48bfcf9634fccec3c0afd267247f7a"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.740591 4688 scope.go:117] "RemoveContainer" containerID="851584e27500eb09e6c5c1e67f4cd7d7ea51388786cb21551fb6fa45e7b08965"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.745026 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.839036 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.865694 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.890512 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 25 12:38:42 crc kubenswrapper[4688]: I1125 12:38:42.955993 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-g5kd9"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.236683 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.339937 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.494732 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz" event={"ID":"fa49233e-de1b-4bea-85a6-de285e0e60f6","Type":"ContainerStarted","Data":"c9cce2a6d7189059161015f5260fb4cd4167ea1bae95a7237b0fc1739614c36e"}
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.495407 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.498024 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r" event={"ID":"e2f91df4-3b39-4c05-9fee-dd3f7622fd13","Type":"ContainerStarted","Data":"76fa9f62315fbf3ce718cbea342b8ba9b568f2ba80e0bc281084705be252bca1"}
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.498999 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.595222 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.740676 4688 scope.go:117] "RemoveContainer" containerID="e437a3d3c6118fb17df2b434528cdbb5e7e1b688b3c5edff2e93d71b02e82d43"
Nov 25 12:38:43 crc kubenswrapper[4688]: E1125 12:38:43.741199 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-744bc4ddc8-58c5m_metallb-system(d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3)\"" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" podUID="d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.843960 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.846045 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.860790 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 25 12:38:43 crc kubenswrapper[4688]: I1125 12:38:43.924795 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.084688 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.101099 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.168581 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.246059 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.316055 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.504317 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.560172 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.741577 4688 scope.go:117] "RemoveContainer" containerID="bb6a86d65fc4462bb6feb9a615459b67373d496fb832d756e98ec34719d73768"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.805371 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.839396 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 25 12:38:44 crc kubenswrapper[4688]: I1125 12:38:44.870241 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 25 12:38:45 crc kubenswrapper[4688]: I1125 12:38:45.107956 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wzwvc"
Nov 25 12:38:45 crc kubenswrapper[4688]: I1125 12:38:45.372414 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 25 12:38:45 crc kubenswrapper[4688]: I1125 12:38:45.522862 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"95c2802c-7143-4d63-8959-434c04453333","Type":"ContainerStarted","Data":"9a5a276e93c99009689073d6f895b51b7b41d6026ded170c5eef4196bb03420f"}
Nov 25 12:38:45 crc kubenswrapper[4688]: I1125 12:38:45.523065 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 25 12:38:45 crc kubenswrapper[4688]: I1125 12:38:45.739840 4688 scope.go:117] "RemoveContainer" containerID="af0595f0847135b8eee76141d94ecc8e009bf8b81ecd5812d8380fdfc081af59"
Nov 25 12:38:45 crc kubenswrapper[4688]: E1125 12:38:45.740394 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-wf6w6_openstack-operators(93553656-ef25-4318-81f1-a4e7f973ed38)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" podUID="93553656-ef25-4318-81f1-a4e7f973ed38"
Nov 25 12:38:46 crc kubenswrapper[4688]: I1125 12:38:46.542408 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Nov 25 12:38:46 crc kubenswrapper[4688]: I1125 12:38:46.542942 4688 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="57dde6a7c4f85cf4c3e2af6ab924179275e8ede39251ec5a8f2f42561f4f95b1" exitCode=137
Nov 25 12:38:46 crc kubenswrapper[4688]: I1125 12:38:46.753896 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-q4ffj"
Nov 25 12:38:46 crc kubenswrapper[4688]: I1125 12:38:46.778689 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-ptqrp"
Nov 25 12:38:46 crc kubenswrapper[4688]: I1125 12:38:46.788917 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-vkj6d"
Nov 25 12:38:46 crc kubenswrapper[4688]: I1125 12:38:46.884222 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2snng"
Nov 25 12:38:46 crc kubenswrapper[4688]: I1125 12:38:46.974000 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-zfvn2"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.022844 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.022922 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.138696 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.140620 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-tn6tq"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168053 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168126 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168195 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168243 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168288 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168341 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168422 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168464 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.168556 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.169164 4688 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.169206 4688 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.169216 4688 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.169224 4688 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.181006 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.262597 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-b9jdn"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.270557 4688 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.496966 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-9qfpp"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.511579 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-vcnvc"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.530733 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-4zlm5"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.559189 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.559333 4688 scope.go:117] "RemoveContainer" containerID="57dde6a7c4f85cf4c3e2af6ab924179275e8ede39251ec5a8f2f42561f4f95b1"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.559401 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.580394 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-94snn"
Nov 25 12:38:47 crc kubenswrapper[4688]: I1125 12:38:47.624220 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-ltlms"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.052851 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-gzslz"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.192328 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-gf8vv"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.452099 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-c877c965-jptwb"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.493226 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-q2tdz"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.535196 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.536327 4688 scope.go:117] "RemoveContainer" containerID="df0f433e30e32f0d3c04ad3c1883e48f5fbd03269f030372bac74384dafaddab"
Nov 25 12:38:48 crc kubenswrapper[4688]: E1125 12:38:48.537468 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-nxnmg_openstack-operators(3649a66a-709f-4b77-b798-e5f90eeb2e5d)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" podUID="3649a66a-709f-4b77-b798-e5f90eeb2e5d"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.582975 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.583700 4688 scope.go:117] "RemoveContainer" containerID="7b28f41ba2e5bfa29e0a5435d196868eedaad48cd1b9f6bd545e3de7cb9ef627"
Nov 25 12:38:48 crc kubenswrapper[4688]: E1125 12:38:48.583952 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-c76gt_openstack-operators(d4c78fcc-139a-4485-8628-dc14422a4710)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" podUID="d4c78fcc-139a-4485-8628-dc14422a4710"
Nov 25 12:38:48 crc kubenswrapper[4688]: I1125 12:38:48.753043 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Nov 25 12:38:51 crc kubenswrapper[4688]: I1125 12:38:51.823931 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk"
Nov 25 12:38:51 crc kubenswrapper[4688]: I1125 12:38:51.824799 4688 scope.go:117] "RemoveContainer" containerID="a70d847ef7212988f7692aeaf761ed11b7cf3fa137aaf318427b182cbb80eb79"
Nov 25 12:38:51 crc kubenswrapper[4688]: E1125 12:38:51.825004 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_openstack-operators(1364865a-3285-428d-b672-064400c43c94)\"" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" podUID="1364865a-3285-428d-b672-064400c43c94"
Nov 25 12:38:53 crc kubenswrapper[4688]: I1125 12:38:53.750383 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 25 12:38:55 crc kubenswrapper[4688]: I1125 12:38:55.163513 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Nov 25 12:38:57 crc kubenswrapper[4688]: I1125 12:38:57.691247 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-kvt5r"
Nov 25 12:38:58 crc kubenswrapper[4688]: I1125 12:38:58.282908 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 25 12:38:58 crc kubenswrapper[4688]: I1125 12:38:58.740914 4688 scope.go:117] "RemoveContainer" containerID="af0595f0847135b8eee76141d94ecc8e009bf8b81ecd5812d8380fdfc081af59"
Nov 25 12:38:58 crc kubenswrapper[4688]: I1125 12:38:58.741481 4688 scope.go:117] "RemoveContainer" containerID="e437a3d3c6118fb17df2b434528cdbb5e7e1b688b3c5edff2e93d71b02e82d43"
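The E-level pod_workers.go:1301 records above show the kubelet's crash-loop throttling: each failed "manager"/"operator" container is left down until a back-off window expires (20s at this point; by default the kubelet doubles the window per failure up to a 5m cap), and the "RemoveContainer" records at 12:38:58 onward are those same pods being retried once the window lapses. A minimal sketch for pulling the back-off records out of a dump like this one, assuming Python 3; the regex is keyed to the exact message format quoted above, and the function names are illustrative:

import re
import sys

# Matches the CrashLoopBackOff clause inside the pod_workers.go:1301 records, e.g.
#   back-off 20s restarting failed container=manager
#   pod=placement-operator-controller-manager-5db546f9d9-nxnmg_openstack-operators(3649a66a-...)
# The klog quoting around it (err="failed to \"StartContainer\" ...") does not
# matter to the search; only the back-off clause itself is parsed.
BACKOFF_RE = re.compile(
    r'back-off (?P<delay>\S+) restarting failed container=(?P<container>\S+) '
    r'pod=(?P<pod>[^_\s]+)_(?P<namespace>[^()\s]+)\((?P<uid>[0-9a-f-]+)\)'
)

def crashloop_records(lines):
    """Yield (namespace, pod, container, delay, uid) for each back-off record."""
    for line in lines:
        m = BACKOFF_RE.search(line)
        if m:
            yield (m['namespace'], m['pod'], m['container'], m['delay'], m['uid'])

if __name__ == '__main__':
    for rec in crashloop_records(sys.stdin):
        print(*rec)

Run against this section, the sketch would report the metallb, rabbitmq-cluster, placement, swift, and openstack operator managers, all still in their 20s window.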
Nov 25 12:38:59 crc kubenswrapper[4688]: I1125 12:38:59.668333 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" event={"ID":"d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3","Type":"ContainerStarted","Data":"60ca68361c378175d1ba12c71c37b1de56ca36d3801d00886e5ab64bb91bddd7"}
Nov 25 12:38:59 crc kubenswrapper[4688]: I1125 12:38:59.668867 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m"
Nov 25 12:38:59 crc kubenswrapper[4688]: I1125 12:38:59.670142 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wf6w6" event={"ID":"93553656-ef25-4318-81f1-a4e7f973ed38","Type":"ContainerStarted","Data":"bde3f967beb5a4b59ce6d592a0e1c483a08154ee0e5fe2e3019aaf5cd2365a6e"}
Nov 25 12:38:59 crc kubenswrapper[4688]: I1125 12:38:59.739544 4688 scope.go:117] "RemoveContainer" containerID="7b28f41ba2e5bfa29e0a5435d196868eedaad48cd1b9f6bd545e3de7cb9ef627"
Nov 25 12:39:00 crc kubenswrapper[4688]: I1125 12:39:00.682503 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt" event={"ID":"d4c78fcc-139a-4485-8628-dc14422a4710","Type":"ContainerStarted","Data":"aa826d082f0e332c4e2839d498bb1c34c046bd65af65e7f8b11d712e087e16e5"}
Nov 25 12:39:00 crc kubenswrapper[4688]: I1125 12:39:00.683109 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt"
Nov 25 12:39:01 crc kubenswrapper[4688]: I1125 12:39:01.242363 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Nov 25 12:39:02 crc kubenswrapper[4688]: I1125 12:39:02.740388 4688 scope.go:117] "RemoveContainer" containerID="df0f433e30e32f0d3c04ad3c1883e48f5fbd03269f030372bac74384dafaddab"
Nov 25 12:39:02 crc kubenswrapper[4688]: I1125 12:39:02.742887 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jlmrt"
Nov 25 12:39:03 crc kubenswrapper[4688]: I1125 12:39:03.711871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg" event={"ID":"3649a66a-709f-4b77-b798-e5f90eeb2e5d","Type":"ContainerStarted","Data":"37326284031cab84aa6795c7bd6ff188f64e4b707bb765adc452df410bf7d1f5"}
Nov 25 12:39:03 crc kubenswrapper[4688]: I1125 12:39:03.712444 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg"
Nov 25 12:39:04 crc kubenswrapper[4688]: I1125 12:39:04.362185 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7c6np"
Nov 25 12:39:04 crc kubenswrapper[4688]: I1125 12:39:04.740203 4688 scope.go:117] "RemoveContainer" containerID="a70d847ef7212988f7692aeaf761ed11b7cf3fa137aaf318427b182cbb80eb79"
Nov 25 12:39:05 crc kubenswrapper[4688]: I1125 12:39:05.746825 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk" event={"ID":"1364865a-3285-428d-b672-064400c43c94","Type":"ContainerStarted","Data":"7eecfafa83399629bb99ee37b686a44d5b69b32c01ec67eb411bf56adb713482"}
Nov 25 12:39:05 crc kubenswrapper[4688]: I1125 12:39:05.747547 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk"
Nov 25 12:39:08 crc kubenswrapper[4688]: I1125 12:39:08.541098 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-nxnmg"
Nov 25 12:39:08 crc kubenswrapper[4688]: I1125 12:39:08.586611 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-c76gt"
Nov 25 12:39:09 crc kubenswrapper[4688]: I1125 12:39:09.426257 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 25 12:39:09 crc kubenswrapper[4688]: I1125 12:39:09.726645 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 25 12:39:10 crc kubenswrapper[4688]: I1125 12:39:10.602103 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.804593 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kspxw"]
Nov 25 12:39:11 crc kubenswrapper[4688]: E1125 12:39:11.805109 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" containerName="installer"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.805125 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" containerName="installer"
Nov 25 12:39:11 crc kubenswrapper[4688]: E1125 12:39:11.805148 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.805155 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.805427 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.805460 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d99bb2-102b-4fb3-8f5c-1ffaa1ccbf62" containerName="installer"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.807133 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.816421 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kspxw"]
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.865123 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6bdd9b6cb6-vgfmk"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.941947 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-utilities\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.942265 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-catalog-content\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:11 crc kubenswrapper[4688]: I1125 12:39:11.942357 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmp65\" (UniqueName: \"kubernetes.io/projected/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-kube-api-access-tmp65\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.004220 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vksfl"]
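Each catalog pod's volumes then move through the same three reconciler phases, in order: "operationExecutor.VerifyControllerAttachedVolume started" (reconciler_common.go:245), "operationExecutor.MountVolume started" (reconciler_common.go:218), and "MountVolume.SetUp succeeded" (operation_generator.go:637), as the records that follow for community-operators-kspxw and certified-operators-vksfl show. A sketch that groups these records by the volume's UniqueName and prints the observed phase order, assuming Python 3 and the escaped quoting seen in this dump (the helper names are illustrative):

import re
import sys
from collections import defaultdict

# Phase markers exactly as they appear in the kubelet records in this dump.
PHASES = [
    ("operationExecutor.VerifyControllerAttachedVolume started", "verify-attached"),
    ("operationExecutor.MountVolume started", "mount-started"),
    ("MountVolume.SetUp succeeded", "setup-succeeded"),
]
# UniqueName is quoted as \"...\" inside the klog message; accept both forms.
UNIQUE_NAME_RE = re.compile(r'UniqueName: \\?"([^"\\]+)')

def volume_phases(lines):
    """Map UniqueName -> list of phases in the order they were logged."""
    seen = defaultdict(list)
    for line in lines:
        for marker, phase in PHASES:
            if marker in line:
                m = UNIQUE_NAME_RE.search(line)
                if m:
                    seen[m.group(1)].append(phase)
                break
    return seen

if __name__ == '__main__':
    for name, phases in volume_phases(sys.stdin).items():
        print(name, '->', ' -> '.join(phases))

A healthy pod should show the three phases in that order for every volume; a volume stuck at mount-started is where one would start debugging.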
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.006790 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.013773 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vksfl"]
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.043835 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-utilities\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.043879 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-catalog-content\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.043977 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmp65\" (UniqueName: \"kubernetes.io/projected/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-kube-api-access-tmp65\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.044671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-catalog-content\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.044668 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-utilities\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.082734 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmp65\" (UniqueName: \"kubernetes.io/projected/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-kube-api-access-tmp65\") pod \"community-operators-kspxw\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.146452 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-utilities\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.146566 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjh8t\" (UniqueName: \"kubernetes.io/projected/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-kube-api-access-wjh8t\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.146620 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-catalog-content\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.167826 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.248353 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjh8t\" (UniqueName: \"kubernetes.io/projected/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-kube-api-access-wjh8t\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.248432 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-catalog-content\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.248619 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-utilities\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.249133 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-utilities\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.249433 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-catalog-content\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.283045 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.288603 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjh8t\" (UniqueName: \"kubernetes.io/projected/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-kube-api-access-wjh8t\") pod \"certified-operators-vksfl\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.328031 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:12 crc kubenswrapper[4688]: I1125 12:39:12.990458 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kspxw"]
Nov 25 12:39:13 crc kubenswrapper[4688]: W1125 12:39:13.027137 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ba77c3_6f63_461a_bd45_4b8d4b02fdd7.slice/crio-eed2bf1cda88b864da29f289a7f276b2c589923c0edace6ed53001efbac4ef5d WatchSource:0}: Error finding container eed2bf1cda88b864da29f289a7f276b2c589923c0edace6ed53001efbac4ef5d: Status 404 returned error can't find the container with id eed2bf1cda88b864da29f289a7f276b2c589923c0edace6ed53001efbac4ef5d
Nov 25 12:39:13 crc kubenswrapper[4688]: I1125 12:39:13.107200 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vksfl"]
Nov 25 12:39:13 crc kubenswrapper[4688]: W1125 12:39:13.110646 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e94a2fb_bb97_4eef_a856_35db9d6f67a5.slice/crio-036236cb5e0f014132178e1f9593aa474ca7d3c8e3b97921190e215388010eff WatchSource:0}: Error finding container 036236cb5e0f014132178e1f9593aa474ca7d3c8e3b97921190e215388010eff: Status 404 returned error can't find the container with id 036236cb5e0f014132178e1f9593aa474ca7d3c8e3b97921190e215388010eff
Nov 25 12:39:13 crc kubenswrapper[4688]: I1125 12:39:13.838365 4688 generic.go:334] "Generic (PLEG): container finished" podID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerID="0170e2d66d3ead0b5af27b1323186b747a1ce93282089f6c1a04307645b6eee0" exitCode=0
Nov 25 12:39:13 crc kubenswrapper[4688]: I1125 12:39:13.838614 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vksfl" event={"ID":"4e94a2fb-bb97-4eef-a856-35db9d6f67a5","Type":"ContainerDied","Data":"0170e2d66d3ead0b5af27b1323186b747a1ce93282089f6c1a04307645b6eee0"}
Nov 25 12:39:13 crc kubenswrapper[4688]: I1125 12:39:13.838662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vksfl" event={"ID":"4e94a2fb-bb97-4eef-a856-35db9d6f67a5","Type":"ContainerStarted","Data":"036236cb5e0f014132178e1f9593aa474ca7d3c8e3b97921190e215388010eff"}
Nov 25 12:39:13 crc kubenswrapper[4688]: I1125 12:39:13.840783 4688 generic.go:334] "Generic (PLEG): container finished" podID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerID="ca215dfb7e164c885bcc85b59dd0c9c8ce5de8c90d11dfa3453a5f5ccd99020c" exitCode=0
Nov 25 12:39:13 crc kubenswrapper[4688]: I1125 12:39:13.840828 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kspxw" event={"ID":"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7","Type":"ContainerDied","Data":"ca215dfb7e164c885bcc85b59dd0c9c8ce5de8c90d11dfa3453a5f5ccd99020c"}
Nov 25 12:39:13 crc kubenswrapper[4688]: I1125 12:39:13.840863 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kspxw" event={"ID":"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7","Type":"ContainerStarted","Data":"eed2bf1cda88b864da29f289a7f276b2c589923c0edace6ed53001efbac4ef5d"}
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.397504 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdrj"]
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.400168 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.419384 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdrj"]
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.496955 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-utilities\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.497073 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-catalog-content\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.497123 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45l6n\" (UniqueName: \"kubernetes.io/projected/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-kube-api-access-45l6n\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.599380 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-utilities\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.599892 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-catalog-content\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.599950 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45l6n\" (UniqueName: \"kubernetes.io/projected/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-kube-api-access-45l6n\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.600167 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-utilities\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.600212 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-catalog-content\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.611588 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t4drg"]
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.614331 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.644301 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4drg"]
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.660152 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45l6n\" (UniqueName: \"kubernetes.io/projected/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-kube-api-access-45l6n\") pod \"redhat-marketplace-kcdrj\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.701325 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-catalog-content\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.701477 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xsfh\" (UniqueName: \"kubernetes.io/projected/dd5c074d-ea43-461d-b137-686d1c19e8a8-kube-api-access-8xsfh\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.701634 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-utilities\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.775655 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcdrj"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.803080 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-utilities\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.803203 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-catalog-content\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.803301 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xsfh\" (UniqueName: \"kubernetes.io/projected/dd5c074d-ea43-461d-b137-686d1c19e8a8-kube-api-access-8xsfh\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.804757 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-utilities\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.805397 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-catalog-content\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.833335 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xsfh\" (UniqueName: \"kubernetes.io/projected/dd5c074d-ea43-461d-b137-686d1c19e8a8-kube-api-access-8xsfh\") pod \"redhat-operators-t4drg\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.907128 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vksfl" event={"ID":"4e94a2fb-bb97-4eef-a856-35db9d6f67a5","Type":"ContainerStarted","Data":"8bc6c0bdb15bfef8387b1a01c93e13ff8de2b2c320d4c049c6870c6ace37a029"}
Nov 25 12:39:14 crc kubenswrapper[4688]: I1125 12:39:14.911793 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kspxw" event={"ID":"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7","Type":"ContainerStarted","Data":"5bfe1a59dd8724acfc1ee2e74cef9e650fa8c5114c18f927b4fcd54e978f47d8"}
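The two W1125 manager.go:1169 warnings at 12:39:13 above look like the usual benign race between cadvisor's cgroup watcher and CRI-O: the pod's cgroup slice appears before the runtime has registered the new container ID, so the lookup returns a 404 and is retried. The slice name encodes the pod UID with dashes replaced by underscores (systemd reserves "-" for slice hierarchy), which is how the crio-eed2bf1c... path ties back to community-operators-kspxw's UID 45ba77c3-6f63-461a-bd45-4b8d4b02fdd7. A small decoder sketch, assuming Python 3 (the function name and pattern are illustrative):

import re

# kubepods-burstable-pod<UID with '_' for '-'>.slice, as in the warnings above:
# 32 hex digits plus 4 underscores = 36 characters.
POD_SLICE_RE = re.compile(r'pod([0-9a-f_]{36})\.slice')

def pod_uid_from_cgroup(path):
    """Recover the pod UID from a kubepods cgroup path, or None if absent."""
    m = POD_SLICE_RE.search(path)
    return m.group(1).replace('_', '-') if m else None

path = ('/kubepods.slice/kubepods-burstable.slice/'
        'kubepods-burstable-pod45ba77c3_6f63_461a_bd45_4b8d4b02fdd7.slice/'
        'crio-eed2bf1cda88b864da29f289a7f276b2c589923c0edace6ed53001efbac4ef5d')
print(pod_uid_from_cgroup(path))  # -> 45ba77c3-6f63-461a-bd45-4b8d4b02fdd7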
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.004315 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4drg"
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.479633 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdrj"]
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.812562 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4drg"]
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.922912 4688 generic.go:334] "Generic (PLEG): container finished" podID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerID="8bc6c0bdb15bfef8387b1a01c93e13ff8de2b2c320d4c049c6870c6ace37a029" exitCode=0
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.922995 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vksfl" event={"ID":"4e94a2fb-bb97-4eef-a856-35db9d6f67a5","Type":"ContainerDied","Data":"8bc6c0bdb15bfef8387b1a01c93e13ff8de2b2c320d4c049c6870c6ace37a029"}
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.925861 4688 generic.go:334] "Generic (PLEG): container finished" podID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerID="2fe22dd86314905acb68a4022e14ae4436c691b6b3192ce7a792d8184f7f21a0" exitCode=0
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.925930 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdrj" event={"ID":"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0","Type":"ContainerDied","Data":"2fe22dd86314905acb68a4022e14ae4436c691b6b3192ce7a792d8184f7f21a0"}
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.925953 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdrj" event={"ID":"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0","Type":"ContainerStarted","Data":"9a01232233cc5e92689e02319048ecb95be002587fb4ea132dee8f231136d26c"}
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.936887 4688 generic.go:334] "Generic (PLEG): container finished" podID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerID="5bfe1a59dd8724acfc1ee2e74cef9e650fa8c5114c18f927b4fcd54e978f47d8" exitCode=0
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.936955 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kspxw" event={"ID":"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7","Type":"ContainerDied","Data":"5bfe1a59dd8724acfc1ee2e74cef9e650fa8c5114c18f927b4fcd54e978f47d8"}
Nov 25 12:39:15 crc kubenswrapper[4688]: I1125 12:39:15.943507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4drg" event={"ID":"dd5c074d-ea43-461d-b137-686d1c19e8a8","Type":"ContainerStarted","Data":"ddc28c0a7a25f250f016b749a782635ae911827a2f621317d4b17023c1edb2e3"}
Nov 25 12:39:16 crc kubenswrapper[4688]: I1125 12:39:16.955787 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kspxw" event={"ID":"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7","Type":"ContainerStarted","Data":"71d8d6c8636fed2afdf020ea0e4088ed637f700dd8dd856fc8c11a88ba633c1b"}
Nov 25 12:39:16 crc kubenswrapper[4688]: I1125 12:39:16.957830 4688 generic.go:334] "Generic (PLEG): container finished" podID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerID="db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a" exitCode=0
Nov 25 12:39:16 crc kubenswrapper[4688]: I1125 12:39:16.957869 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4drg" event={"ID":"dd5c074d-ea43-461d-b137-686d1c19e8a8","Type":"ContainerDied","Data":"db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a"}
Nov 25 12:39:16 crc kubenswrapper[4688]: I1125 12:39:16.960459 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vksfl" event={"ID":"4e94a2fb-bb97-4eef-a856-35db9d6f67a5","Type":"ContainerStarted","Data":"97cbb8faa0ef32aa2a97cd4e932caa884dfaf248b1be6cf338fb065e6e0f12c9"}
Nov 25 12:39:16 crc kubenswrapper[4688]: I1125 12:39:16.976436 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kspxw" podStartSLOduration=3.237809332 podStartE2EDuration="5.976418887s" podCreationTimestamp="2025-11-25 12:39:11 +0000 UTC" firstStartedPulling="2025-11-25 12:39:13.842536987 +0000 UTC m=+1503.952165915" lastFinishedPulling="2025-11-25 12:39:16.581146602 +0000 UTC m=+1506.690775470" observedRunningTime="2025-11-25 12:39:16.971500446 +0000 UTC m=+1507.081129324" watchObservedRunningTime="2025-11-25 12:39:16.976418887 +0000 UTC m=+1507.086047755"
Nov 25 12:39:17 crc kubenswrapper[4688]: I1125 12:39:17.000334 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vksfl" podStartSLOduration=3.511727588 podStartE2EDuration="6.000317269s" podCreationTimestamp="2025-11-25 12:39:11 +0000 UTC" firstStartedPulling="2025-11-25 12:39:13.840753109 +0000 UTC m=+1503.950381977" lastFinishedPulling="2025-11-25 12:39:16.32934279 +0000 UTC m=+1506.438971658" observedRunningTime="2025-11-25 12:39:16.994882773 +0000 UTC m=+1507.104511641" watchObservedRunningTime="2025-11-25 12:39:17.000317269 +0000 UTC m=+1507.109946137"
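The pod_startup_latency_tracker records above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (12:39:16.976418887 − 12:39:11 = 5.976418887s for community-operators-kspxw), and podStartSLOduration appears to be that figure minus the image-pull window, which is best recomputed from the monotonic m=+ offsets to avoid wall-clock rounding. A quick check of the kspxw numbers, assuming that reading of the fields:

# Values copied from the community-operators-kspxw record above; the m=+ figures
# are the kubelet's monotonic clock offsets at each pull-phase timestamp.
first_started_pulling = 1503.952165915   # m=+ at firstStartedPulling
last_finished_pulling = 1506.690775470   # m=+ at lastFinishedPulling
e2e = 5.976418887                        # podStartE2EDuration, seconds

pull = last_finished_pulling - first_started_pulling
print(f'pull window:  {pull:.9f}s')        # 2.738609555s
print(f'SLO duration: {e2e - pull:.9f}s')  # 3.237809332s, matching podStartSLOduration

The same arithmetic reproduces the certified-operators-vksfl record (6.000317269 − 2.488589681 = 3.511727588), so the SLO figure in this kubelet excludes time spent pulling images.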
Nov 25 12:39:17 crc kubenswrapper[4688]: I1125 12:39:17.992148 4688 generic.go:334] "Generic (PLEG): container finished" podID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerID="790bb62ba08d82f30488e618af0f91498d400213c072f6be3ba7067393d3ad80" exitCode=0
Nov 25 12:39:17 crc kubenswrapper[4688]: I1125 12:39:17.992766 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdrj" event={"ID":"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0","Type":"ContainerDied","Data":"790bb62ba08d82f30488e618af0f91498d400213c072f6be3ba7067393d3ad80"}
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.010648 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nxvc6"]
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.013021 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.023019 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxvc6"]
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.157128 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6sf\" (UniqueName: \"kubernetes.io/projected/67af6482-25c8-4361-86ae-a4a814b36969-kube-api-access-2c6sf\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.157182 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-catalog-content\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.157263 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dgtg2"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.157316 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-utilities\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.259478 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-utilities\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.259617 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c6sf\" (UniqueName: \"kubernetes.io/projected/67af6482-25c8-4361-86ae-a4a814b36969-kube-api-access-2c6sf\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.259655 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-catalog-content\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.260078 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-utilities\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.260218 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-catalog-content\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.283837 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c6sf\" (UniqueName: \"kubernetes.io/projected/67af6482-25c8-4361-86ae-a4a814b36969-kube-api-access-2c6sf\") pod \"certified-operators-nxvc6\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.380347 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxvc6"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.414839 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wqvwm"]
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.416969 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.450354 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqvwm"]
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.564578 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7h8\" (UniqueName: \"kubernetes.io/projected/6e7c23e2-95fc-4509-bd75-3486816e9aa1-kube-api-access-zd7h8\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.564695 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-utilities\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.564935 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-catalog-content\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.668228 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7h8\" (UniqueName: \"kubernetes.io/projected/6e7c23e2-95fc-4509-bd75-3486816e9aa1-kube-api-access-zd7h8\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.668664 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-utilities\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.668909 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-catalog-content\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.669398 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-utilities\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.670814 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-catalog-content\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.699442 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd7h8\" (UniqueName: \"kubernetes.io/projected/6e7c23e2-95fc-4509-bd75-3486816e9aa1-kube-api-access-zd7h8\") pod \"community-operators-wqvwm\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:18 crc kubenswrapper[4688]: I1125 12:39:18.918969 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqvwm"
Nov 25 12:39:19 crc kubenswrapper[4688]: I1125 12:39:19.020576 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4drg" event={"ID":"dd5c074d-ea43-461d-b137-686d1c19e8a8","Type":"ContainerStarted","Data":"579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950"}
Nov 25 12:39:19 crc kubenswrapper[4688]: I1125 12:39:19.022252 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxvc6"]
Nov 25 12:39:19 crc kubenswrapper[4688]: I1125 12:39:19.407878 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 25 12:39:19 crc kubenswrapper[4688]: W1125 12:39:19.443178 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e7c23e2_95fc_4509_bd75_3486816e9aa1.slice/crio-490121a0180ee66ccc03c8c74b1fb8bbbe05fdebf37c60ae6c89d7a89211863c WatchSource:0}: Error finding container 490121a0180ee66ccc03c8c74b1fb8bbbe05fdebf37c60ae6c89d7a89211863c: Status 404 returned error can't find the container with id 490121a0180ee66ccc03c8c74b1fb8bbbe05fdebf37c60ae6c89d7a89211863c
Nov 25 12:39:19 crc kubenswrapper[4688]: I1125 12:39:19.448503 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqvwm"]
Nov 25 12:39:20 crc kubenswrapper[4688]: I1125 12:39:20.039669 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdrj" event={"ID":"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0","Type":"ContainerStarted","Data":"00d74b58769fdc8b84b347ae59899452ed0a1c7c556c0ccd71715a009ef08140"}
Nov 25 12:39:20 crc kubenswrapper[4688]: I1125 12:39:20.065415 4688 generic.go:334] "Generic (PLEG): container finished" podID="67af6482-25c8-4361-86ae-a4a814b36969" containerID="41855d143c71b4f2a185281bac712b7fc8df26c17c239fae0513d8901788e9f6" exitCode=0
Nov 25 12:39:20 crc kubenswrapper[4688]: I1125 12:39:20.065476 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvc6" event={"ID":"67af6482-25c8-4361-86ae-a4a814b36969","Type":"ContainerDied","Data":"41855d143c71b4f2a185281bac712b7fc8df26c17c239fae0513d8901788e9f6"}
Nov 25 12:39:20 crc kubenswrapper[4688]: I1125 12:39:20.065501 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvc6" event={"ID":"67af6482-25c8-4361-86ae-a4a814b36969","Type":"ContainerStarted","Data":"f78a0ea84d097ecf641ebb73292cef71334aa110c8ef72478089cd232eac16ab"}
Nov 25 12:39:20 crc kubenswrapper[4688]: I1125 12:39:20.069906 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqvwm" event={"ID":"6e7c23e2-95fc-4509-bd75-3486816e9aa1","Type":"ContainerStarted","Data":"490121a0180ee66ccc03c8c74b1fb8bbbe05fdebf37c60ae6c89d7a89211863c"}
Nov 25 12:39:21 crc kubenswrapper[4688]: I1125 12:39:21.081636 4688 generic.go:334] "Generic (PLEG): container finished" podID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerID="ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7" exitCode=0
Nov 25 12:39:21 crc kubenswrapper[4688]: I1125 12:39:21.081876 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqvwm" event={"ID":"6e7c23e2-95fc-4509-bd75-3486816e9aa1","Type":"ContainerDied","Data":"ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7"}
Nov 25 12:39:21 crc kubenswrapper[4688]: I1125 12:39:21.084230 4688 generic.go:334] "Generic (PLEG): container finished" podID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerID="579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950" exitCode=0
Nov 25 12:39:21 crc kubenswrapper[4688]: I1125 12:39:21.085303 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4drg" event={"ID":"dd5c074d-ea43-461d-b137-686d1c19e8a8","Type":"ContainerDied","Data":"579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950"}
Nov 25 12:39:21 crc kubenswrapper[4688]: I1125 12:39:21.117317 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kcdrj" podStartSLOduration=3.511083877 podStartE2EDuration="7.117293491s" podCreationTimestamp="2025-11-25 12:39:14 +0000 UTC" firstStartedPulling="2025-11-25 12:39:15.929657907 +0000 UTC m=+1506.039286775" lastFinishedPulling="2025-11-25 12:39:19.535867521 +0000 UTC m=+1509.645496389" observedRunningTime="2025-11-25 12:39:21.112766238 +0000 UTC m=+1511.222395106" watchObservedRunningTime="2025-11-25 12:39:21.117293491 +0000 UTC m=+1511.226922359"
Nov 25 12:39:22 crc kubenswrapper[4688]: I1125 12:39:22.168435 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:22 crc kubenswrapper[4688]: I1125 12:39:22.168756 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:22 crc kubenswrapper[4688]: I1125 12:39:22.217926 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kspxw"
Nov 25 12:39:22 crc kubenswrapper[4688]: I1125 12:39:22.328441 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vksfl"
Nov 25 12:39:22 crc kubenswrapper[4688]: I1125
12:39:22.328496 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vksfl" Nov 25 12:39:22 crc kubenswrapper[4688]: I1125 12:39:22.392050 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vksfl" Nov 25 12:39:23 crc kubenswrapper[4688]: I1125 12:39:23.165744 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kspxw" Nov 25 12:39:23 crc kubenswrapper[4688]: I1125 12:39:23.167928 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vksfl" Nov 25 12:39:24 crc kubenswrapper[4688]: I1125 12:39:24.776047 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kcdrj" Nov 25 12:39:24 crc kubenswrapper[4688]: I1125 12:39:24.777189 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kcdrj" Nov 25 12:39:24 crc kubenswrapper[4688]: I1125 12:39:24.828398 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kcdrj" Nov 25 12:39:25 crc kubenswrapper[4688]: I1125 12:39:25.175966 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kcdrj" Nov 25 12:39:25 crc kubenswrapper[4688]: I1125 12:39:25.443334 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 12:39:27 crc kubenswrapper[4688]: I1125 12:39:27.346067 4688 scope.go:117] "RemoveContainer" containerID="0cbbb336c486796eebc1fabe1a442d44297859a15c850e48808f250410a23ae7" Nov 25 12:39:29 crc kubenswrapper[4688]: I1125 12:39:29.014996 4688 scope.go:117] "RemoveContainer" containerID="1c0b82d9acbe647f35357bf526612ef6bff68e115f38bece75cf66375e5a2752" Nov 25 12:39:29 crc kubenswrapper[4688]: I1125 12:39:29.067313 4688 scope.go:117] "RemoveContainer" containerID="72030925f121a9025397fa04c3da9b472b7212412d88e6511af79e5af6638f1a" Nov 25 12:39:30 crc kubenswrapper[4688]: I1125 12:39:30.180662 4688 generic.go:334] "Generic (PLEG): container finished" podID="67af6482-25c8-4361-86ae-a4a814b36969" containerID="9d3344105243dc7a685c68cb3d1d997e12aa3733416d5ff7966ac5f8682f1e25" exitCode=0 Nov 25 12:39:30 crc kubenswrapper[4688]: I1125 12:39:30.180948 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvc6" event={"ID":"67af6482-25c8-4361-86ae-a4a814b36969","Type":"ContainerDied","Data":"9d3344105243dc7a685c68cb3d1d997e12aa3733416d5ff7966ac5f8682f1e25"} Nov 25 12:39:30 crc kubenswrapper[4688]: I1125 12:39:30.185547 4688 generic.go:334] "Generic (PLEG): container finished" podID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerID="c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be" exitCode=0 Nov 25 12:39:30 crc kubenswrapper[4688]: I1125 12:39:30.185620 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqvwm" event={"ID":"6e7c23e2-95fc-4509-bd75-3486816e9aa1","Type":"ContainerDied","Data":"c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be"} Nov 25 12:39:30 crc kubenswrapper[4688]: I1125 12:39:30.190665 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4drg" 
event={"ID":"dd5c074d-ea43-461d-b137-686d1c19e8a8","Type":"ContainerStarted","Data":"ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e"} Nov 25 12:39:30 crc kubenswrapper[4688]: I1125 12:39:30.221106 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t4drg" podStartSLOduration=4.247237895 podStartE2EDuration="16.221085783s" podCreationTimestamp="2025-11-25 12:39:14 +0000 UTC" firstStartedPulling="2025-11-25 12:39:17.041168276 +0000 UTC m=+1507.150797154" lastFinishedPulling="2025-11-25 12:39:29.015016174 +0000 UTC m=+1519.124645042" observedRunningTime="2025-11-25 12:39:30.218411471 +0000 UTC m=+1520.328040339" watchObservedRunningTime="2025-11-25 12:39:30.221085783 +0000 UTC m=+1520.330714661" Nov 25 12:39:31 crc kubenswrapper[4688]: I1125 12:39:31.304691 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-744bc4ddc8-58c5m" Nov 25 12:39:31 crc kubenswrapper[4688]: I1125 12:39:31.794714 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdrj"] Nov 25 12:39:31 crc kubenswrapper[4688]: I1125 12:39:31.795297 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kcdrj" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerName="registry-server" containerID="cri-o://00d74b58769fdc8b84b347ae59899452ed0a1c7c556c0ccd71715a009ef08140" gracePeriod=2 Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.234302 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvc6" event={"ID":"67af6482-25c8-4361-86ae-a4a814b36969","Type":"ContainerStarted","Data":"3b322a896c7a0b65a6538914d0343cc49da7f4f3fa27225f93af20e30d089cd1"} Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.238729 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqvwm" event={"ID":"6e7c23e2-95fc-4509-bd75-3486816e9aa1","Type":"ContainerStarted","Data":"6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a"} Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.243031 4688 generic.go:334] "Generic (PLEG): container finished" podID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerID="00d74b58769fdc8b84b347ae59899452ed0a1c7c556c0ccd71715a009ef08140" exitCode=0 Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.243074 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdrj" event={"ID":"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0","Type":"ContainerDied","Data":"00d74b58769fdc8b84b347ae59899452ed0a1c7c556c0ccd71715a009ef08140"} Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.267230 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nxvc6" podStartSLOduration=4.6085586240000005 podStartE2EDuration="15.267209082s" podCreationTimestamp="2025-11-25 12:39:17 +0000 UTC" firstStartedPulling="2025-11-25 12:39:20.072067981 +0000 UTC m=+1510.181696849" lastFinishedPulling="2025-11-25 12:39:30.730718439 +0000 UTC m=+1520.840347307" observedRunningTime="2025-11-25 12:39:32.253563925 +0000 UTC m=+1522.363192803" watchObservedRunningTime="2025-11-25 12:39:32.267209082 +0000 UTC m=+1522.376837950" Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.285454 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-wqvwm" podStartSLOduration=6.752619957 podStartE2EDuration="14.285436581s" podCreationTimestamp="2025-11-25 12:39:18 +0000 UTC" firstStartedPulling="2025-11-25 12:39:23.11242869 +0000 UTC m=+1513.222057558" lastFinishedPulling="2025-11-25 12:39:30.645245314 +0000 UTC m=+1520.754874182" observedRunningTime="2025-11-25 12:39:32.283845868 +0000 UTC m=+1522.393474736" watchObservedRunningTime="2025-11-25 12:39:32.285436581 +0000 UTC m=+1522.395065449" Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.335413 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcdrj" Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.435218 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-utilities\") pod \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.435401 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45l6n\" (UniqueName: \"kubernetes.io/projected/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-kube-api-access-45l6n\") pod \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.435557 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-catalog-content\") pod \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\" (UID: \"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0\") " Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.436183 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-utilities" (OuterVolumeSpecName: "utilities") pod "2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" (UID: "2da97f9d-8b2a-42bd-8d36-405c4ff9efb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.441280 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-kube-api-access-45l6n" (OuterVolumeSpecName: "kube-api-access-45l6n") pod "2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" (UID: "2da97f9d-8b2a-42bd-8d36-405c4ff9efb0"). InnerVolumeSpecName "kube-api-access-45l6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.449170 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" (UID: "2da97f9d-8b2a-42bd-8d36-405c4ff9efb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.539109 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45l6n\" (UniqueName: \"kubernetes.io/projected/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-kube-api-access-45l6n\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.539177 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:32 crc kubenswrapper[4688]: I1125 12:39:32.539194 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:33 crc kubenswrapper[4688]: I1125 12:39:33.253860 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcdrj" Nov 25 12:39:33 crc kubenswrapper[4688]: I1125 12:39:33.254040 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdrj" event={"ID":"2da97f9d-8b2a-42bd-8d36-405c4ff9efb0","Type":"ContainerDied","Data":"9a01232233cc5e92689e02319048ecb95be002587fb4ea132dee8f231136d26c"} Nov 25 12:39:33 crc kubenswrapper[4688]: I1125 12:39:33.254412 4688 scope.go:117] "RemoveContainer" containerID="00d74b58769fdc8b84b347ae59899452ed0a1c7c556c0ccd71715a009ef08140" Nov 25 12:39:33 crc kubenswrapper[4688]: I1125 12:39:33.280961 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdrj"] Nov 25 12:39:33 crc kubenswrapper[4688]: I1125 12:39:33.288467 4688 scope.go:117] "RemoveContainer" containerID="790bb62ba08d82f30488e618af0f91498d400213c072f6be3ba7067393d3ad80" Nov 25 12:39:33 crc kubenswrapper[4688]: I1125 12:39:33.296661 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdrj"] Nov 25 12:39:33 crc kubenswrapper[4688]: I1125 12:39:33.316226 4688 scope.go:117] "RemoveContainer" containerID="2fe22dd86314905acb68a4022e14ae4436c691b6b3192ce7a792d8184f7f21a0" Nov 25 12:39:34 crc kubenswrapper[4688]: I1125 12:39:34.765619 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" path="/var/lib/kubelet/pods/2da97f9d-8b2a-42bd-8d36-405c4ff9efb0/volumes" Nov 25 12:39:35 crc kubenswrapper[4688]: I1125 12:39:35.005266 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t4drg" Nov 25 12:39:35 crc kubenswrapper[4688]: I1125 12:39:35.005359 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t4drg" Nov 25 12:39:36 crc kubenswrapper[4688]: I1125 12:39:36.058062 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t4drg" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:39:36 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Nov 25 12:39:36 crc kubenswrapper[4688]: > Nov 25 12:39:38 crc kubenswrapper[4688]: I1125 12:39:38.381811 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nxvc6" Nov 25 12:39:38 crc kubenswrapper[4688]: I1125 
12:39:38.382366 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nxvc6" Nov 25 12:39:38 crc kubenswrapper[4688]: I1125 12:39:38.431328 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nxvc6" Nov 25 12:39:38 crc kubenswrapper[4688]: I1125 12:39:38.919701 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqvwm" Nov 25 12:39:38 crc kubenswrapper[4688]: I1125 12:39:38.920029 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wqvwm" Nov 25 12:39:38 crc kubenswrapper[4688]: I1125 12:39:38.978590 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqvwm" Nov 25 12:39:39 crc kubenswrapper[4688]: I1125 12:39:39.365898 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqvwm" Nov 25 12:39:39 crc kubenswrapper[4688]: I1125 12:39:39.371062 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nxvc6" Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.188033 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vksfl"] Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.188918 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vksfl" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerName="registry-server" containerID="cri-o://97cbb8faa0ef32aa2a97cd4e932caa884dfaf248b1be6cf338fb065e6e0f12c9" gracePeriod=2 Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.363380 4688 generic.go:334] "Generic (PLEG): container finished" podID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerID="97cbb8faa0ef32aa2a97cd4e932caa884dfaf248b1be6cf338fb065e6e0f12c9" exitCode=0 Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.363424 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vksfl" event={"ID":"4e94a2fb-bb97-4eef-a856-35db9d6f67a5","Type":"ContainerDied","Data":"97cbb8faa0ef32aa2a97cd4e932caa884dfaf248b1be6cf338fb065e6e0f12c9"} Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.677209 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vksfl" Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.800710 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-catalog-content\") pod \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.800752 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjh8t\" (UniqueName: \"kubernetes.io/projected/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-kube-api-access-wjh8t\") pod \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.800873 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-utilities\") pod \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\" (UID: \"4e94a2fb-bb97-4eef-a856-35db9d6f67a5\") " Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.803638 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-utilities" (OuterVolumeSpecName: "utilities") pod "4e94a2fb-bb97-4eef-a856-35db9d6f67a5" (UID: "4e94a2fb-bb97-4eef-a856-35db9d6f67a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.807701 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-kube-api-access-wjh8t" (OuterVolumeSpecName: "kube-api-access-wjh8t") pod "4e94a2fb-bb97-4eef-a856-35db9d6f67a5" (UID: "4e94a2fb-bb97-4eef-a856-35db9d6f67a5"). InnerVolumeSpecName "kube-api-access-wjh8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.842870 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e94a2fb-bb97-4eef-a856-35db9d6f67a5" (UID: "4e94a2fb-bb97-4eef-a856-35db9d6f67a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.903229 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.903265 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.903280 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjh8t\" (UniqueName: \"kubernetes.io/projected/4e94a2fb-bb97-4eef-a856-35db9d6f67a5-kube-api-access-wjh8t\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.986463 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxvc6"] Nov 25 12:39:43 crc kubenswrapper[4688]: I1125 12:39:43.986745 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nxvc6" podUID="67af6482-25c8-4361-86ae-a4a814b36969" containerName="registry-server" containerID="cri-o://3b322a896c7a0b65a6538914d0343cc49da7f4f3fa27225f93af20e30d089cd1" gracePeriod=2 Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.195854 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kspxw"] Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.196139 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kspxw" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerName="registry-server" containerID="cri-o://71d8d6c8636fed2afdf020ea0e4088ed637f700dd8dd856fc8c11a88ba633c1b" gracePeriod=2 Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.376104 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vksfl" event={"ID":"4e94a2fb-bb97-4eef-a856-35db9d6f67a5","Type":"ContainerDied","Data":"036236cb5e0f014132178e1f9593aa474ca7d3c8e3b97921190e215388010eff"} Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.376430 4688 scope.go:117] "RemoveContainer" containerID="97cbb8faa0ef32aa2a97cd4e932caa884dfaf248b1be6cf338fb065e6e0f12c9" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.376154 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vksfl" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.390686 4688 generic.go:334] "Generic (PLEG): container finished" podID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerID="71d8d6c8636fed2afdf020ea0e4088ed637f700dd8dd856fc8c11a88ba633c1b" exitCode=0 Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.390747 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kspxw" event={"ID":"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7","Type":"ContainerDied","Data":"71d8d6c8636fed2afdf020ea0e4088ed637f700dd8dd856fc8c11a88ba633c1b"} Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.395403 4688 generic.go:334] "Generic (PLEG): container finished" podID="67af6482-25c8-4361-86ae-a4a814b36969" containerID="3b322a896c7a0b65a6538914d0343cc49da7f4f3fa27225f93af20e30d089cd1" exitCode=0 Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.395444 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvc6" event={"ID":"67af6482-25c8-4361-86ae-a4a814b36969","Type":"ContainerDied","Data":"3b322a896c7a0b65a6538914d0343cc49da7f4f3fa27225f93af20e30d089cd1"} Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.395469 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvc6" event={"ID":"67af6482-25c8-4361-86ae-a4a814b36969","Type":"ContainerDied","Data":"f78a0ea84d097ecf641ebb73292cef71334aa110c8ef72478089cd232eac16ab"} Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.395483 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78a0ea84d097ecf641ebb73292cef71334aa110c8ef72478089cd232eac16ab" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.485683 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nxvc6" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.496932 4688 scope.go:117] "RemoveContainer" containerID="8bc6c0bdb15bfef8387b1a01c93e13ff8de2b2c320d4c049c6870c6ace37a029" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.501598 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vksfl"] Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.518272 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vksfl"] Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.541237 4688 scope.go:117] "RemoveContainer" containerID="0170e2d66d3ead0b5af27b1323186b747a1ce93282089f6c1a04307645b6eee0" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.590813 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqvwm"] Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.591059 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqvwm" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerName="registry-server" containerID="cri-o://6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a" gracePeriod=2 Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.617206 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-utilities\") pod \"67af6482-25c8-4361-86ae-a4a814b36969\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.617755 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c6sf\" (UniqueName: \"kubernetes.io/projected/67af6482-25c8-4361-86ae-a4a814b36969-kube-api-access-2c6sf\") pod \"67af6482-25c8-4361-86ae-a4a814b36969\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.617848 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-catalog-content\") pod \"67af6482-25c8-4361-86ae-a4a814b36969\" (UID: \"67af6482-25c8-4361-86ae-a4a814b36969\") " Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.618052 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-utilities" (OuterVolumeSpecName: "utilities") pod "67af6482-25c8-4361-86ae-a4a814b36969" (UID: "67af6482-25c8-4361-86ae-a4a814b36969"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.618409 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.618625 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kspxw" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.623655 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67af6482-25c8-4361-86ae-a4a814b36969-kube-api-access-2c6sf" (OuterVolumeSpecName: "kube-api-access-2c6sf") pod "67af6482-25c8-4361-86ae-a4a814b36969" (UID: "67af6482-25c8-4361-86ae-a4a814b36969"). InnerVolumeSpecName "kube-api-access-2c6sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.678612 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67af6482-25c8-4361-86ae-a4a814b36969" (UID: "67af6482-25c8-4361-86ae-a4a814b36969"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.720407 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-catalog-content\") pod \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.720578 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-utilities\") pod \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.720628 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmp65\" (UniqueName: \"kubernetes.io/projected/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-kube-api-access-tmp65\") pod \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\" (UID: \"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7\") " Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.721194 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c6sf\" (UniqueName: \"kubernetes.io/projected/67af6482-25c8-4361-86ae-a4a814b36969-kube-api-access-2c6sf\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.721531 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67af6482-25c8-4361-86ae-a4a814b36969-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.721180 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-utilities" (OuterVolumeSpecName: "utilities") pod "45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" (UID: "45ba77c3-6f63-461a-bd45-4b8d4b02fdd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.724210 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-kube-api-access-tmp65" (OuterVolumeSpecName: "kube-api-access-tmp65") pod "45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" (UID: "45ba77c3-6f63-461a-bd45-4b8d4b02fdd7"). InnerVolumeSpecName "kube-api-access-tmp65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.757032 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" path="/var/lib/kubelet/pods/4e94a2fb-bb97-4eef-a856-35db9d6f67a5/volumes" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.775724 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" (UID: "45ba77c3-6f63-461a-bd45-4b8d4b02fdd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.823854 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.823894 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:44 crc kubenswrapper[4688]: I1125 12:39:44.823908 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmp65\" (UniqueName: \"kubernetes.io/projected/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7-kube-api-access-tmp65\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.010107 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqvwm" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.129685 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-utilities\") pod \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.129724 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-catalog-content\") pod \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.129774 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7h8\" (UniqueName: \"kubernetes.io/projected/6e7c23e2-95fc-4509-bd75-3486816e9aa1-kube-api-access-zd7h8\") pod \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\" (UID: \"6e7c23e2-95fc-4509-bd75-3486816e9aa1\") " Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.130673 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-utilities" (OuterVolumeSpecName: "utilities") pod "6e7c23e2-95fc-4509-bd75-3486816e9aa1" (UID: "6e7c23e2-95fc-4509-bd75-3486816e9aa1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.135813 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7c23e2-95fc-4509-bd75-3486816e9aa1-kube-api-access-zd7h8" (OuterVolumeSpecName: "kube-api-access-zd7h8") pod "6e7c23e2-95fc-4509-bd75-3486816e9aa1" (UID: "6e7c23e2-95fc-4509-bd75-3486816e9aa1"). InnerVolumeSpecName "kube-api-access-zd7h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.193751 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e7c23e2-95fc-4509-bd75-3486816e9aa1" (UID: "6e7c23e2-95fc-4509-bd75-3486816e9aa1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.232320 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.232398 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7c23e2-95fc-4509-bd75-3486816e9aa1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.232423 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd7h8\" (UniqueName: \"kubernetes.io/projected/6e7c23e2-95fc-4509-bd75-3486816e9aa1-kube-api-access-zd7h8\") on node \"crc\" DevicePath \"\"" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.408374 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kspxw" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.408423 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kspxw" event={"ID":"45ba77c3-6f63-461a-bd45-4b8d4b02fdd7","Type":"ContainerDied","Data":"eed2bf1cda88b864da29f289a7f276b2c589923c0edace6ed53001efbac4ef5d"} Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.408643 4688 scope.go:117] "RemoveContainer" containerID="71d8d6c8636fed2afdf020ea0e4088ed637f700dd8dd856fc8c11a88ba633c1b" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.413158 4688 generic.go:334] "Generic (PLEG): container finished" podID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerID="6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a" exitCode=0 Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.413235 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqvwm" event={"ID":"6e7c23e2-95fc-4509-bd75-3486816e9aa1","Type":"ContainerDied","Data":"6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a"} Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.413263 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqvwm" event={"ID":"6e7c23e2-95fc-4509-bd75-3486816e9aa1","Type":"ContainerDied","Data":"490121a0180ee66ccc03c8c74b1fb8bbbe05fdebf37c60ae6c89d7a89211863c"} Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.413354 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqvwm" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.420013 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxvc6" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.431748 4688 scope.go:117] "RemoveContainer" containerID="5bfe1a59dd8724acfc1ee2e74cef9e650fa8c5114c18f927b4fcd54e978f47d8" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.456129 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kspxw"] Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.465891 4688 scope.go:117] "RemoveContainer" containerID="ca215dfb7e164c885bcc85b59dd0c9c8ce5de8c90d11dfa3453a5f5ccd99020c" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.485897 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kspxw"] Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.509423 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxvc6"] Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.510467 4688 scope.go:117] "RemoveContainer" containerID="6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.520047 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nxvc6"] Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.529836 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqvwm"] Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.539539 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wqvwm"] Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.551981 4688 scope.go:117] "RemoveContainer" containerID="c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.578913 4688 scope.go:117] "RemoveContainer" containerID="ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.628382 4688 scope.go:117] "RemoveContainer" containerID="6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a" Nov 25 12:39:45 crc kubenswrapper[4688]: E1125 12:39:45.628752 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a\": container with ID starting with 6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a not found: ID does not exist" containerID="6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.628786 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a"} err="failed to get container status \"6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a\": rpc error: code = NotFound desc = could not find container \"6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a\": container with ID starting with 6ebad41ae945720c5d87d4f7de28d8e9256336c91598861ebbc16551ac3c1a9a not found: ID does not exist" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.628807 4688 scope.go:117] "RemoveContainer" 
containerID="c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be" Nov 25 12:39:45 crc kubenswrapper[4688]: E1125 12:39:45.629054 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be\": container with ID starting with c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be not found: ID does not exist" containerID="c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.629076 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be"} err="failed to get container status \"c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be\": rpc error: code = NotFound desc = could not find container \"c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be\": container with ID starting with c4235c9ba803cbea3acfcbd32add56aea11e3839083e87a6c56bdde0d48575be not found: ID does not exist" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.629090 4688 scope.go:117] "RemoveContainer" containerID="ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7" Nov 25 12:39:45 crc kubenswrapper[4688]: E1125 12:39:45.629742 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7\": container with ID starting with ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7 not found: ID does not exist" containerID="ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7" Nov 25 12:39:45 crc kubenswrapper[4688]: I1125 12:39:45.629807 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7"} err="failed to get container status \"ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7\": rpc error: code = NotFound desc = could not find container \"ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7\": container with ID starting with ce145ab43405b7968a82272ec2c3d123daa96d77f8bd84328d8f4b95f324fdc7 not found: ID does not exist" Nov 25 12:39:46 crc kubenswrapper[4688]: I1125 12:39:46.056476 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t4drg" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:39:46 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Nov 25 12:39:46 crc kubenswrapper[4688]: > Nov 25 12:39:46 crc kubenswrapper[4688]: I1125 12:39:46.756537 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" path="/var/lib/kubelet/pods/45ba77c3-6f63-461a-bd45-4b8d4b02fdd7/volumes" Nov 25 12:39:46 crc kubenswrapper[4688]: I1125 12:39:46.757623 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67af6482-25c8-4361-86ae-a4a814b36969" path="/var/lib/kubelet/pods/67af6482-25c8-4361-86ae-a4a814b36969/volumes" Nov 25 12:39:46 crc kubenswrapper[4688]: I1125 12:39:46.758341 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" path="/var/lib/kubelet/pods/6e7c23e2-95fc-4509-bd75-3486816e9aa1/volumes" Nov 25 
12:39:56 crc kubenswrapper[4688]: I1125 12:39:56.055047 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t4drg" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="registry-server" probeResult="failure" output=< Nov 25 12:39:56 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Nov 25 12:39:56 crc kubenswrapper[4688]: > Nov 25 12:40:05 crc kubenswrapper[4688]: I1125 12:40:05.073022 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t4drg" Nov 25 12:40:05 crc kubenswrapper[4688]: I1125 12:40:05.131769 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t4drg" Nov 25 12:40:05 crc kubenswrapper[4688]: I1125 12:40:05.311020 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4drg"] Nov 25 12:40:06 crc kubenswrapper[4688]: I1125 12:40:06.627875 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t4drg" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="registry-server" containerID="cri-o://ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e" gracePeriod=2 Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.176954 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4drg" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.283222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xsfh\" (UniqueName: \"kubernetes.io/projected/dd5c074d-ea43-461d-b137-686d1c19e8a8-kube-api-access-8xsfh\") pod \"dd5c074d-ea43-461d-b137-686d1c19e8a8\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.283487 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-utilities\") pod \"dd5c074d-ea43-461d-b137-686d1c19e8a8\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.283595 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-catalog-content\") pod \"dd5c074d-ea43-461d-b137-686d1c19e8a8\" (UID: \"dd5c074d-ea43-461d-b137-686d1c19e8a8\") " Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.283922 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-utilities" (OuterVolumeSpecName: "utilities") pod "dd5c074d-ea43-461d-b137-686d1c19e8a8" (UID: "dd5c074d-ea43-461d-b137-686d1c19e8a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.284285 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.289542 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5c074d-ea43-461d-b137-686d1c19e8a8-kube-api-access-8xsfh" (OuterVolumeSpecName: "kube-api-access-8xsfh") pod "dd5c074d-ea43-461d-b137-686d1c19e8a8" (UID: "dd5c074d-ea43-461d-b137-686d1c19e8a8"). InnerVolumeSpecName "kube-api-access-8xsfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.379076 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd5c074d-ea43-461d-b137-686d1c19e8a8" (UID: "dd5c074d-ea43-461d-b137-686d1c19e8a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.386917 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xsfh\" (UniqueName: \"kubernetes.io/projected/dd5c074d-ea43-461d-b137-686d1c19e8a8-kube-api-access-8xsfh\") on node \"crc\" DevicePath \"\"" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.386958 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5c074d-ea43-461d-b137-686d1c19e8a8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.640624 4688 generic.go:334] "Generic (PLEG): container finished" podID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerID="ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e" exitCode=0 Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.640669 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4drg" event={"ID":"dd5c074d-ea43-461d-b137-686d1c19e8a8","Type":"ContainerDied","Data":"ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e"} Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.640698 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4drg" event={"ID":"dd5c074d-ea43-461d-b137-686d1c19e8a8","Type":"ContainerDied","Data":"ddc28c0a7a25f250f016b749a782635ae911827a2f621317d4b17023c1edb2e3"} Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.640716 4688 scope.go:117] "RemoveContainer" containerID="ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.640850 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4drg" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.688393 4688 scope.go:117] "RemoveContainer" containerID="579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.695782 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4drg"] Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.713549 4688 scope.go:117] "RemoveContainer" containerID="db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.721039 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t4drg"] Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.758332 4688 scope.go:117] "RemoveContainer" containerID="ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e" Nov 25 12:40:07 crc kubenswrapper[4688]: E1125 12:40:07.758899 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e\": container with ID starting with ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e not found: ID does not exist" containerID="ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.758958 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e"} err="failed to get container status \"ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e\": rpc error: code = NotFound desc = could not find container \"ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e\": container with ID starting with ca834d9a76ed2a569501e73a0e9ecdf3dcaeaf0435c53cf7a8832e2ef5ea0f7e not found: ID does not exist" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.758992 4688 scope.go:117] "RemoveContainer" containerID="579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950" Nov 25 12:40:07 crc kubenswrapper[4688]: E1125 12:40:07.759367 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950\": container with ID starting with 579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950 not found: ID does not exist" containerID="579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.759407 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950"} err="failed to get container status \"579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950\": rpc error: code = NotFound desc = could not find container \"579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950\": container with ID starting with 579a89d3262e8987ba3f09249b237a3866ec2ee7c410dc1ff0e379013e778950 not found: ID does not exist" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.759428 4688 scope.go:117] "RemoveContainer" containerID="db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a" Nov 25 12:40:07 crc kubenswrapper[4688]: E1125 12:40:07.759709 4688 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a\": container with ID starting with db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a not found: ID does not exist" containerID="db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a" Nov 25 12:40:07 crc kubenswrapper[4688]: I1125 12:40:07.759744 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a"} err="failed to get container status \"db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a\": rpc error: code = NotFound desc = could not find container \"db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a\": container with ID starting with db917a5d93c0f033dcfa73f141a38a540ce6b88785496fa2fbf38e35f9694a9a not found: ID does not exist" Nov 25 12:40:08 crc kubenswrapper[4688]: I1125 12:40:08.751599 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" path="/var/lib/kubelet/pods/dd5c074d-ea43-461d-b137-686d1c19e8a8/volumes" Nov 25 12:40:23 crc kubenswrapper[4688]: I1125 12:40:23.813279 4688 generic.go:334] "Generic (PLEG): container finished" podID="9b744290-1dac-4fcf-99d7-6a4a7b2287f6" containerID="e4b99662683e41ed067205bef990028c1596404eab48856b79dc86f452e487c1" exitCode=0 Nov 25 12:40:23 crc kubenswrapper[4688]: I1125 12:40:23.813372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" event={"ID":"9b744290-1dac-4fcf-99d7-6a4a7b2287f6","Type":"ContainerDied","Data":"e4b99662683e41ed067205bef990028c1596404eab48856b79dc86f452e487c1"} Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.276865 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.437329 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cd7d\" (UniqueName: \"kubernetes.io/projected/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-kube-api-access-4cd7d\") pod \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.437380 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-bootstrap-combined-ca-bundle\") pod \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.437592 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-ssh-key\") pod \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.437661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-inventory\") pod \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\" (UID: \"9b744290-1dac-4fcf-99d7-6a4a7b2287f6\") " Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.444378 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9b744290-1dac-4fcf-99d7-6a4a7b2287f6" (UID: "9b744290-1dac-4fcf-99d7-6a4a7b2287f6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.446007 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-kube-api-access-4cd7d" (OuterVolumeSpecName: "kube-api-access-4cd7d") pod "9b744290-1dac-4fcf-99d7-6a4a7b2287f6" (UID: "9b744290-1dac-4fcf-99d7-6a4a7b2287f6"). InnerVolumeSpecName "kube-api-access-4cd7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.466217 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9b744290-1dac-4fcf-99d7-6a4a7b2287f6" (UID: "9b744290-1dac-4fcf-99d7-6a4a7b2287f6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.468075 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-inventory" (OuterVolumeSpecName: "inventory") pod "9b744290-1dac-4fcf-99d7-6a4a7b2287f6" (UID: "9b744290-1dac-4fcf-99d7-6a4a7b2287f6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.540311 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.540717 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.540730 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cd7d\" (UniqueName: \"kubernetes.io/projected/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-kube-api-access-4cd7d\") on node \"crc\" DevicePath \"\"" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.540742 4688 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b744290-1dac-4fcf-99d7-6a4a7b2287f6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.839143 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" event={"ID":"9b744290-1dac-4fcf-99d7-6a4a7b2287f6","Type":"ContainerDied","Data":"174943e13d8c7d5a68b94f351e71468ba2d886be2445cbf70981da73efc4c17d"} Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.839180 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="174943e13d8c7d5a68b94f351e71468ba2d886be2445cbf70981da73efc4c17d" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.839186 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.938894 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd"] Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939325 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939348 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939362 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939371 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939386 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939394 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939409 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939418 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939428 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939436 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939454 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67af6482-25c8-4361-86ae-a4a814b36969" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939462 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="67af6482-25c8-4361-86ae-a4a814b36969" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939475 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67af6482-25c8-4361-86ae-a4a814b36969" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939482 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="67af6482-25c8-4361-86ae-a4a814b36969" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939502 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939510 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939526 4688 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939550 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939567 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939574 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939602 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939609 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939626 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939633 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939651 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939659 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939677 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b744290-1dac-4fcf-99d7-6a4a7b2287f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939688 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b744290-1dac-4fcf-99d7-6a4a7b2287f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939707 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939714 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939728 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67af6482-25c8-4361-86ae-a4a814b36969" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939735 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="67af6482-25c8-4361-86ae-a4a814b36969" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939747 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939755 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" 
containerName="extract-content" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939771 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939778 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerName="extract-utilities" Nov 25 12:40:25 crc kubenswrapper[4688]: E1125 12:40:25.939793 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.939799 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.940000 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da97f9d-8b2a-42bd-8d36-405c4ff9efb0" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.940020 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e94a2fb-bb97-4eef-a856-35db9d6f67a5" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.940040 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b744290-1dac-4fcf-99d7-6a4a7b2287f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.940058 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="67af6482-25c8-4361-86ae-a4a814b36969" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.940069 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7c23e2-95fc-4509-bd75-3486816e9aa1" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.940085 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ba77c3-6f63-461a-bd45-4b8d4b02fdd7" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.940098 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5c074d-ea43-461d-b137-686d1c19e8a8" containerName="registry-server" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.940837 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.943567 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.943759 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.944190 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.944280 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:40:25 crc kubenswrapper[4688]: I1125 12:40:25.985211 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd"] Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.050994 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.051110 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppz99\" (UniqueName: \"kubernetes.io/projected/6394a29c-847b-438c-826a-03443a7bb430-kube-api-access-ppz99\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.051188 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.152311 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.152405 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppz99\" (UniqueName: \"kubernetes.io/projected/6394a29c-847b-438c-826a-03443a7bb430-kube-api-access-ppz99\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.152469 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.156484 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.156501 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.170462 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppz99\" (UniqueName: \"kubernetes.io/projected/6394a29c-847b-438c-826a-03443a7bb430-kube-api-access-ppz99\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.255768 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.768110 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd"] Nov 25 12:40:26 crc kubenswrapper[4688]: W1125 12:40:26.769211 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6394a29c_847b_438c_826a_03443a7bb430.slice/crio-b460779290ab565a4434708f9142c9dce53aa5d71bcafa282c73841ff492e027 WatchSource:0}: Error finding container b460779290ab565a4434708f9142c9dce53aa5d71bcafa282c73841ff492e027: Status 404 returned error can't find the container with id b460779290ab565a4434708f9142c9dce53aa5d71bcafa282c73841ff492e027 Nov 25 12:40:26 crc kubenswrapper[4688]: I1125 12:40:26.848995 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" event={"ID":"6394a29c-847b-438c-826a-03443a7bb430","Type":"ContainerStarted","Data":"b460779290ab565a4434708f9142c9dce53aa5d71bcafa282c73841ff492e027"} Nov 25 12:40:27 crc kubenswrapper[4688]: I1125 12:40:27.864065 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" event={"ID":"6394a29c-847b-438c-826a-03443a7bb430","Type":"ContainerStarted","Data":"2a19a9bf1b773f05b52d3dc420241f1fdab489bede1b1895d95aa153b123f159"} Nov 25 12:40:27 crc kubenswrapper[4688]: I1125 12:40:27.882115 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" podStartSLOduration=2.446553875 podStartE2EDuration="2.882099991s" podCreationTimestamp="2025-11-25 12:40:25 +0000 UTC" firstStartedPulling="2025-11-25 12:40:26.772730283 +0000 UTC m=+1576.882359151" lastFinishedPulling="2025-11-25 
12:40:27.208276399 +0000 UTC m=+1577.317905267" observedRunningTime="2025-11-25 12:40:27.878895185 +0000 UTC m=+1577.988524053" watchObservedRunningTime="2025-11-25 12:40:27.882099991 +0000 UTC m=+1577.991728859" Nov 25 12:40:29 crc kubenswrapper[4688]: I1125 12:40:29.132286 4688 scope.go:117] "RemoveContainer" containerID="75cf2ccc4f8b654add849ae66a3331317f5b0939f81571bde99647f759ec8b17" Nov 25 12:40:29 crc kubenswrapper[4688]: I1125 12:40:29.176685 4688 scope.go:117] "RemoveContainer" containerID="f774a1faec289c33c0d287a9eefc90cae391208c668c240a06275b1ef4a53e22" Nov 25 12:40:47 crc kubenswrapper[4688]: I1125 12:40:47.854244 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:40:47 crc kubenswrapper[4688]: I1125 12:40:47.855988 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.079743 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vglmq"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.092887 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kr2hz"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.104062 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-76cf-account-create-2z6wp"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.113085 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-76cf-account-create-2z6wp"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.120967 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vglmq"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.128842 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8fq9b"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.136241 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kr2hz"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.147130 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-53c3-account-create-b6rlg"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.156555 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4d6a-account-create-9sw4q"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.165662 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8fq9b"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.175115 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4d6a-account-create-9sw4q"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.187294 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-53c3-account-create-b6rlg"] Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.853904 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:41:17 crc kubenswrapper[4688]: I1125 12:41:17.854275 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:41:18 crc kubenswrapper[4688]: I1125 12:41:18.756412 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797bab00-a9d6-4126-87d4-da7a46c3d318" path="/var/lib/kubelet/pods/797bab00-a9d6-4126-87d4-da7a46c3d318/volumes" Nov 25 12:41:18 crc kubenswrapper[4688]: I1125 12:41:18.757889 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b487b4e-5b03-439f-80f9-53c3e37121c8" path="/var/lib/kubelet/pods/7b487b4e-5b03-439f-80f9-53c3e37121c8/volumes" Nov 25 12:41:18 crc kubenswrapper[4688]: I1125 12:41:18.759083 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9087e75c-690f-4f83-b63a-af99c771cd3a" path="/var/lib/kubelet/pods/9087e75c-690f-4f83-b63a-af99c771cd3a/volumes" Nov 25 12:41:18 crc kubenswrapper[4688]: I1125 12:41:18.760269 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b3f267-42b4-46bc-a0e1-195315d0a782" path="/var/lib/kubelet/pods/97b3f267-42b4-46bc-a0e1-195315d0a782/volumes" Nov 25 12:41:18 crc kubenswrapper[4688]: I1125 12:41:18.762436 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4495873-670c-487e-9989-b6c65bbd0c04" path="/var/lib/kubelet/pods/b4495873-670c-487e-9989-b6c65bbd0c04/volumes" Nov 25 12:41:18 crc kubenswrapper[4688]: I1125 12:41:18.763805 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa319425-2bf3-486e-b80c-52d377a48462" path="/var/lib/kubelet/pods/fa319425-2bf3-486e-b80c-52d377a48462/volumes" Nov 25 12:41:29 crc kubenswrapper[4688]: I1125 12:41:29.353337 4688 scope.go:117] "RemoveContainer" containerID="e5cb545abc92b3e3e2ab7982202ffe13f3e8c5f2ffba02c37bdd4e9c295005c7" Nov 25 12:41:29 crc kubenswrapper[4688]: I1125 12:41:29.406242 4688 scope.go:117] "RemoveContainer" containerID="987a1cf4240b583095ea403bcad3e3341f724add0adc0f0f2986f67e1cfc875b" Nov 25 12:41:29 crc kubenswrapper[4688]: I1125 12:41:29.464660 4688 scope.go:117] "RemoveContainer" containerID="9a5253ee3b047f77fca5b89fd4f9a5c11dc374e599b3f219fa0f2d09d9a73bc7" Nov 25 12:41:29 crc kubenswrapper[4688]: I1125 12:41:29.513512 4688 scope.go:117] "RemoveContainer" containerID="e6a626027b04d17da404309e4e66bf2873f532ce4108d039cf2b2c6253c40c27" Nov 25 12:41:29 crc kubenswrapper[4688]: I1125 12:41:29.583347 4688 scope.go:117] "RemoveContainer" containerID="3219f83158aa64779a6f470c70b3713565f284ac600de8b731574eec53073fb9" Nov 25 12:41:29 crc kubenswrapper[4688]: I1125 12:41:29.625845 4688 scope.go:117] "RemoveContainer" containerID="1b944fb434524bd401c6ba3d3b183461a0de872fcba04560b791fd3ec3e98fd8" Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.060347 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4668-account-create-vvbfr"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.082601 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rn78b"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.101951 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-4668-account-create-vvbfr"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.111390 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bz5zq"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.121986 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-ee49-account-create-z8krp"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.131659 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-dbv7f"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.141516 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cab6-account-create-v69wv"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.149840 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rn78b"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.157373 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7cdc-account-create-rwhtz"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.164897 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-j2kst"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.172660 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bz5zq"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.181829 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-ee49-account-create-z8krp"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.191285 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-dbv7f"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.201134 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7cdc-account-create-rwhtz"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.210889 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cab6-account-create-v69wv"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.219435 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-j2kst"] Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.753978 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01cff46f-6371-4d95-b13d-e9b7a337c230" path="/var/lib/kubelet/pods/01cff46f-6371-4d95-b13d-e9b7a337c230/volumes" Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.756825 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d85141d-b092-4062-a640-0192faa87846" path="/var/lib/kubelet/pods/0d85141d-b092-4062-a640-0192faa87846/volumes" Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.758027 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5088e7-042f-48f0-97d3-d97e820ac314" path="/var/lib/kubelet/pods/0f5088e7-042f-48f0-97d3-d97e820ac314/volumes" Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.758857 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2495ae3d-a5dd-4c69-86d8-6081174afdc0" path="/var/lib/kubelet/pods/2495ae3d-a5dd-4c69-86d8-6081174afdc0/volumes" Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.759716 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e9927b-64ba-43f5-a5e7-2061e1288e8d" path="/var/lib/kubelet/pods/38e9927b-64ba-43f5-a5e7-2061e1288e8d/volumes" Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.761432 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3a0ba682-5263-45cb-afc5-ca4d3f4b1354" path="/var/lib/kubelet/pods/3a0ba682-5263-45cb-afc5-ca4d3f4b1354/volumes" Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.762346 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb3a057-3576-4c65-a327-c3325780d24a" path="/var/lib/kubelet/pods/9eb3a057-3576-4c65-a327-c3325780d24a/volumes" Nov 25 12:41:40 crc kubenswrapper[4688]: I1125 12:41:40.763235 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f060668a-f36b-47f8-88eb-f7ecc06a491c" path="/var/lib/kubelet/pods/f060668a-f36b-47f8-88eb-f7ecc06a491c/volumes" Nov 25 12:41:47 crc kubenswrapper[4688]: I1125 12:41:47.853545 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:41:47 crc kubenswrapper[4688]: I1125 12:41:47.854072 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:41:47 crc kubenswrapper[4688]: I1125 12:41:47.854133 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:41:47 crc kubenswrapper[4688]: I1125 12:41:47.854956 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:41:47 crc kubenswrapper[4688]: I1125 12:41:47.855016 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" gracePeriod=600 Nov 25 12:41:47 crc kubenswrapper[4688]: E1125 12:41:47.991383 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:41:48 crc kubenswrapper[4688]: I1125 12:41:48.711356 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" exitCode=0 Nov 25 12:41:48 crc kubenswrapper[4688]: I1125 12:41:48.711395 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc"} Nov 25 12:41:48 crc kubenswrapper[4688]: I1125 12:41:48.711425 4688 
scope.go:117] "RemoveContainer" containerID="adac398a94564aa341b35f325f1f99096f13126fe72668e940408e0ca6a84914" Nov 25 12:41:48 crc kubenswrapper[4688]: I1125 12:41:48.712483 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:41:48 crc kubenswrapper[4688]: E1125 12:41:48.713096 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:41:49 crc kubenswrapper[4688]: I1125 12:41:49.034917 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rqdck"] Nov 25 12:41:49 crc kubenswrapper[4688]: I1125 12:41:49.047576 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rqdck"] Nov 25 12:41:50 crc kubenswrapper[4688]: I1125 12:41:50.038873 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rj4gb"] Nov 25 12:41:50 crc kubenswrapper[4688]: I1125 12:41:50.057157 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rj4gb"] Nov 25 12:41:50 crc kubenswrapper[4688]: I1125 12:41:50.757191 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d582d3-e8f8-49a4-a48c-9c07f6083db5" path="/var/lib/kubelet/pods/79d582d3-e8f8-49a4-a48c-9c07f6083db5/volumes" Nov 25 12:41:50 crc kubenswrapper[4688]: I1125 12:41:50.758253 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5ac079-6de6-4cc5-aad0-86bf3b34feb0" path="/var/lib/kubelet/pods/ab5ac079-6de6-4cc5-aad0-86bf3b34feb0/volumes" Nov 25 12:42:01 crc kubenswrapper[4688]: I1125 12:42:01.739489 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:42:01 crc kubenswrapper[4688]: E1125 12:42:01.740263 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:42:12 crc kubenswrapper[4688]: I1125 12:42:12.983229 4688 generic.go:334] "Generic (PLEG): container finished" podID="6394a29c-847b-438c-826a-03443a7bb430" containerID="2a19a9bf1b773f05b52d3dc420241f1fdab489bede1b1895d95aa153b123f159" exitCode=0 Nov 25 12:42:12 crc kubenswrapper[4688]: I1125 12:42:12.983294 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" event={"ID":"6394a29c-847b-438c-826a-03443a7bb430","Type":"ContainerDied","Data":"2a19a9bf1b773f05b52d3dc420241f1fdab489bede1b1895d95aa153b123f159"} Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.422575 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.428890 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-ssh-key\") pod \"6394a29c-847b-438c-826a-03443a7bb430\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.428958 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppz99\" (UniqueName: \"kubernetes.io/projected/6394a29c-847b-438c-826a-03443a7bb430-kube-api-access-ppz99\") pod \"6394a29c-847b-438c-826a-03443a7bb430\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.429056 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-inventory\") pod \"6394a29c-847b-438c-826a-03443a7bb430\" (UID: \"6394a29c-847b-438c-826a-03443a7bb430\") " Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.435620 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6394a29c-847b-438c-826a-03443a7bb430-kube-api-access-ppz99" (OuterVolumeSpecName: "kube-api-access-ppz99") pod "6394a29c-847b-438c-826a-03443a7bb430" (UID: "6394a29c-847b-438c-826a-03443a7bb430"). InnerVolumeSpecName "kube-api-access-ppz99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.461707 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6394a29c-847b-438c-826a-03443a7bb430" (UID: "6394a29c-847b-438c-826a-03443a7bb430"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.473680 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-inventory" (OuterVolumeSpecName: "inventory") pod "6394a29c-847b-438c-826a-03443a7bb430" (UID: "6394a29c-847b-438c-826a-03443a7bb430"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.530984 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.531017 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppz99\" (UniqueName: \"kubernetes.io/projected/6394a29c-847b-438c-826a-03443a7bb430-kube-api-access-ppz99\") on node \"crc\" DevicePath \"\"" Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.531033 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6394a29c-847b-438c-826a-03443a7bb430-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:42:14 crc kubenswrapper[4688]: I1125 12:42:14.740594 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:42:14 crc kubenswrapper[4688]: E1125 12:42:14.740933 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.002313 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" event={"ID":"6394a29c-847b-438c-826a-03443a7bb430","Type":"ContainerDied","Data":"b460779290ab565a4434708f9142c9dce53aa5d71bcafa282c73841ff492e027"} Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.002352 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b460779290ab565a4434708f9142c9dce53aa5d71bcafa282c73841ff492e027" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.002412 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.077241 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz"] Nov 25 12:42:15 crc kubenswrapper[4688]: E1125 12:42:15.077943 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6394a29c-847b-438c-826a-03443a7bb430" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.077974 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6394a29c-847b-438c-826a-03443a7bb430" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.078247 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6394a29c-847b-438c-826a-03443a7bb430" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.079067 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.086977 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.087209 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.087484 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.087865 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.112724 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz"] Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.143017 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.143127 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgjzl\" (UniqueName: \"kubernetes.io/projected/824692a9-2ed3-41c1-a34d-52ae721df261-kube-api-access-hgjzl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.143169 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.244776 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgjzl\" (UniqueName: \"kubernetes.io/projected/824692a9-2ed3-41c1-a34d-52ae721df261-kube-api-access-hgjzl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.245071 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.245130 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-inventory\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.248951 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.253143 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.271944 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgjzl\" (UniqueName: \"kubernetes.io/projected/824692a9-2ed3-41c1-a34d-52ae721df261-kube-api-access-hgjzl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.413137 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.751946 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz"] Nov 25 12:42:15 crc kubenswrapper[4688]: I1125 12:42:15.753063 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:42:16 crc kubenswrapper[4688]: I1125 12:42:16.012958 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" event={"ID":"824692a9-2ed3-41c1-a34d-52ae721df261","Type":"ContainerStarted","Data":"b1613c1ed67a4c835c1e11d21f824e640658854092c1cb7874d38946488e8773"} Nov 25 12:42:17 crc kubenswrapper[4688]: I1125 12:42:17.028627 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" event={"ID":"824692a9-2ed3-41c1-a34d-52ae721df261","Type":"ContainerStarted","Data":"bc703e5d11e09caaed9eb33899d3eff4503d8da6143363a76a7e93c8d44c1fba"} Nov 25 12:42:17 crc kubenswrapper[4688]: I1125 12:42:17.062714 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" podStartSLOduration=1.5331638079999999 podStartE2EDuration="2.062691399s" podCreationTimestamp="2025-11-25 12:42:15 +0000 UTC" firstStartedPulling="2025-11-25 12:42:15.752812656 +0000 UTC m=+1685.862441524" lastFinishedPulling="2025-11-25 12:42:16.282340237 +0000 UTC m=+1686.391969115" observedRunningTime="2025-11-25 12:42:17.050724738 +0000 UTC m=+1687.160353646" watchObservedRunningTime="2025-11-25 12:42:17.062691399 +0000 UTC m=+1687.172320287" Nov 25 12:42:25 crc kubenswrapper[4688]: I1125 12:42:25.740682 4688 scope.go:117] "RemoveContainer" 
containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:42:25 crc kubenswrapper[4688]: E1125 12:42:25.741712 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:42:27 crc kubenswrapper[4688]: I1125 12:42:27.054147 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-wctjb"] Nov 25 12:42:27 crc kubenswrapper[4688]: I1125 12:42:27.065636 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-wctjb"] Nov 25 12:42:28 crc kubenswrapper[4688]: I1125 12:42:28.765373 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4297ef88-82df-476f-90f6-e87b26dae1fd" path="/var/lib/kubelet/pods/4297ef88-82df-476f-90f6-e87b26dae1fd/volumes" Nov 25 12:42:29 crc kubenswrapper[4688]: I1125 12:42:29.846099 4688 scope.go:117] "RemoveContainer" containerID="3356c367703815ea150351dc0608ba763e9f09b02eab2e355ff043cf011ecaca" Nov 25 12:42:29 crc kubenswrapper[4688]: I1125 12:42:29.885100 4688 scope.go:117] "RemoveContainer" containerID="7e3445c248401cd21f1a77c6bc99c1f59a5e099500e4cfde7d123456e9ca859e" Nov 25 12:42:29 crc kubenswrapper[4688]: I1125 12:42:29.960004 4688 scope.go:117] "RemoveContainer" containerID="e7fc7d93e345f4c1714fed482ff9eadcafd4d0b0c2bef4511c1c69c973d7ff1c" Nov 25 12:42:30 crc kubenswrapper[4688]: I1125 12:42:30.001598 4688 scope.go:117] "RemoveContainer" containerID="f5ed01d2a641e9b680adf3b55c7b5fd30ec0001d963087225306d8b4bd512dd6" Nov 25 12:42:30 crc kubenswrapper[4688]: I1125 12:42:30.049203 4688 scope.go:117] "RemoveContainer" containerID="df4fd5256d892ee40d4f1c325961a17dbc3010bcafdb5844d6a5a551a53444ad" Nov 25 12:42:30 crc kubenswrapper[4688]: I1125 12:42:30.113784 4688 scope.go:117] "RemoveContainer" containerID="92f2f7d6b9f2d384530e30587ac26eb7f433bed79b9a1dcc7d3810fdf2c4d1eb" Nov 25 12:42:30 crc kubenswrapper[4688]: I1125 12:42:30.137605 4688 scope.go:117] "RemoveContainer" containerID="369380adb515d109980cacf6efc40c3c21d81bc8261b23ee644c8001f845b7b2" Nov 25 12:42:30 crc kubenswrapper[4688]: I1125 12:42:30.160937 4688 scope.go:117] "RemoveContainer" containerID="dddc26a9673cd0b63c6b252ccbcb05906ac2da7548c27fa4ca71a94e9cfd2466" Nov 25 12:42:30 crc kubenswrapper[4688]: I1125 12:42:30.182067 4688 scope.go:117] "RemoveContainer" containerID="232475f4d078877f621b428449899c525e085f7448a466f5001974157db1d00e" Nov 25 12:42:30 crc kubenswrapper[4688]: I1125 12:42:30.211606 4688 scope.go:117] "RemoveContainer" containerID="847eaa7616d949709a425dc2e5d264b7251af5c0fe2422123a7947060f74883e" Nov 25 12:42:30 crc kubenswrapper[4688]: I1125 12:42:30.232554 4688 scope.go:117] "RemoveContainer" containerID="b12644e475efc6dd8f036474e1045822faed67f268bf85acd69cfff1d5cdf6ef" Nov 25 12:42:31 crc kubenswrapper[4688]: I1125 12:42:31.054043 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8nw4p"] Nov 25 12:42:31 crc kubenswrapper[4688]: I1125 12:42:31.062594 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8nw4p"] Nov 25 12:42:32 crc kubenswrapper[4688]: I1125 12:42:32.751673 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0055b409-5571-400e-a4c1-46a58c368692" path="/var/lib/kubelet/pods/0055b409-5571-400e-a4c1-46a58c368692/volumes" Nov 25 12:42:33 crc kubenswrapper[4688]: I1125 12:42:33.030989 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-crb7s"] Nov 25 12:42:33 crc kubenswrapper[4688]: I1125 12:42:33.039400 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-crb7s"] Nov 25 12:42:34 crc kubenswrapper[4688]: I1125 12:42:34.752143 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05379dbe-faf8-4ac1-a032-40f31cb4e457" path="/var/lib/kubelet/pods/05379dbe-faf8-4ac1-a032-40f31cb4e457/volumes" Nov 25 12:42:37 crc kubenswrapper[4688]: I1125 12:42:37.740006 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:42:37 crc kubenswrapper[4688]: E1125 12:42:37.740583 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:42:39 crc kubenswrapper[4688]: I1125 12:42:39.044262 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rzvvs"] Nov 25 12:42:39 crc kubenswrapper[4688]: I1125 12:42:39.056424 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rzvvs"] Nov 25 12:42:40 crc kubenswrapper[4688]: I1125 12:42:40.754114 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b031476-5e95-46a9-8774-4073f647cb7a" path="/var/lib/kubelet/pods/5b031476-5e95-46a9-8774-4073f647cb7a/volumes" Nov 25 12:42:45 crc kubenswrapper[4688]: I1125 12:42:45.033843 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-tgslp"] Nov 25 12:42:45 crc kubenswrapper[4688]: I1125 12:42:45.043738 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-tgslp"] Nov 25 12:42:46 crc kubenswrapper[4688]: I1125 12:42:46.750796 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff6fe51c-f968-4dd0-93c2-b355ac6c27c7" path="/var/lib/kubelet/pods/ff6fe51c-f968-4dd0-93c2-b355ac6c27c7/volumes" Nov 25 12:42:50 crc kubenswrapper[4688]: I1125 12:42:50.753247 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:42:50 crc kubenswrapper[4688]: E1125 12:42:50.754672 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:42:51 crc kubenswrapper[4688]: I1125 12:42:51.035858 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-slb7z"] Nov 25 12:42:51 crc kubenswrapper[4688]: I1125 12:42:51.043053 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-slb7z"] Nov 25 12:42:52 crc kubenswrapper[4688]: I1125 12:42:52.749824 4688 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588d841f-905c-42bb-9242-2e86b7e66877" path="/var/lib/kubelet/pods/588d841f-905c-42bb-9242-2e86b7e66877/volumes" Nov 25 12:43:03 crc kubenswrapper[4688]: I1125 12:43:03.740149 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:43:03 crc kubenswrapper[4688]: E1125 12:43:03.741245 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:43:15 crc kubenswrapper[4688]: I1125 12:43:15.739706 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:43:15 crc kubenswrapper[4688]: E1125 12:43:15.740490 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:43:30 crc kubenswrapper[4688]: I1125 12:43:30.464499 4688 scope.go:117] "RemoveContainer" containerID="dbf2bb8018c64875e0527dce58759f061f63b22162fe113d611571fd5be820bc" Nov 25 12:43:30 crc kubenswrapper[4688]: I1125 12:43:30.507081 4688 scope.go:117] "RemoveContainer" containerID="234e8cc0f04be0ad168f3a9effceca6cb04ea5b4326a6c9175811a574e0e0aab" Nov 25 12:43:30 crc kubenswrapper[4688]: I1125 12:43:30.559823 4688 scope.go:117] "RemoveContainer" containerID="24252ac5f580701179af2588ffd109dc96f0ac377e33a8cac235da15003ccdcb" Nov 25 12:43:30 crc kubenswrapper[4688]: I1125 12:43:30.611786 4688 scope.go:117] "RemoveContainer" containerID="f5fce7caed6d2ee0655eab95b4631fce3b28f32bba732ab8c03a01cebf6c7d79" Nov 25 12:43:30 crc kubenswrapper[4688]: I1125 12:43:30.639242 4688 scope.go:117] "RemoveContainer" containerID="8e6a890bb14409a3517aed1593aece4db6c1f97e8b1cd44b677e0908070d1c2e" Nov 25 12:43:30 crc kubenswrapper[4688]: I1125 12:43:30.745365 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:43:30 crc kubenswrapper[4688]: E1125 12:43:30.745687 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:43:31 crc kubenswrapper[4688]: I1125 12:43:31.190466 4688 generic.go:334] "Generic (PLEG): container finished" podID="824692a9-2ed3-41c1-a34d-52ae721df261" containerID="bc703e5d11e09caaed9eb33899d3eff4503d8da6143363a76a7e93c8d44c1fba" exitCode=0 Nov 25 12:43:31 crc kubenswrapper[4688]: I1125 12:43:31.190509 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" 
event={"ID":"824692a9-2ed3-41c1-a34d-52ae721df261","Type":"ContainerDied","Data":"bc703e5d11e09caaed9eb33899d3eff4503d8da6143363a76a7e93c8d44c1fba"} Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.620863 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.773260 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-ssh-key\") pod \"824692a9-2ed3-41c1-a34d-52ae721df261\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.773448 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-inventory\") pod \"824692a9-2ed3-41c1-a34d-52ae721df261\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.773498 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgjzl\" (UniqueName: \"kubernetes.io/projected/824692a9-2ed3-41c1-a34d-52ae721df261-kube-api-access-hgjzl\") pod \"824692a9-2ed3-41c1-a34d-52ae721df261\" (UID: \"824692a9-2ed3-41c1-a34d-52ae721df261\") " Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.781622 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/824692a9-2ed3-41c1-a34d-52ae721df261-kube-api-access-hgjzl" (OuterVolumeSpecName: "kube-api-access-hgjzl") pod "824692a9-2ed3-41c1-a34d-52ae721df261" (UID: "824692a9-2ed3-41c1-a34d-52ae721df261"). InnerVolumeSpecName "kube-api-access-hgjzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.803509 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-inventory" (OuterVolumeSpecName: "inventory") pod "824692a9-2ed3-41c1-a34d-52ae721df261" (UID: "824692a9-2ed3-41c1-a34d-52ae721df261"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.803984 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "824692a9-2ed3-41c1-a34d-52ae721df261" (UID: "824692a9-2ed3-41c1-a34d-52ae721df261"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.876044 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.876078 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/824692a9-2ed3-41c1-a34d-52ae721df261-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:43:32 crc kubenswrapper[4688]: I1125 12:43:32.876088 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgjzl\" (UniqueName: \"kubernetes.io/projected/824692a9-2ed3-41c1-a34d-52ae721df261-kube-api-access-hgjzl\") on node \"crc\" DevicePath \"\"" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.209072 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" event={"ID":"824692a9-2ed3-41c1-a34d-52ae721df261","Type":"ContainerDied","Data":"b1613c1ed67a4c835c1e11d21f824e640658854092c1cb7874d38946488e8773"} Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.209122 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1613c1ed67a4c835c1e11d21f824e640658854092c1cb7874d38946488e8773" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.209179 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.297448 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr"] Nov 25 12:43:33 crc kubenswrapper[4688]: E1125 12:43:33.298120 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824692a9-2ed3-41c1-a34d-52ae721df261" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.298151 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="824692a9-2ed3-41c1-a34d-52ae721df261" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.298367 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="824692a9-2ed3-41c1-a34d-52ae721df261" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.299256 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.301432 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.301537 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.301868 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.302013 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.310394 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr"] Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.385228 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.385330 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.385713 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzcjc\" (UniqueName: \"kubernetes.io/projected/1f744c38-f708-44f5-952a-419118bcade4-kube-api-access-tzcjc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.488007 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.488082 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.488197 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzcjc\" (UniqueName: \"kubernetes.io/projected/1f744c38-f708-44f5-952a-419118bcade4-kube-api-access-tzcjc\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.491393 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.491671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.512710 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzcjc\" (UniqueName: \"kubernetes.io/projected/1f744c38-f708-44f5-952a-419118bcade4-kube-api-access-tzcjc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:33 crc kubenswrapper[4688]: I1125 12:43:33.618729 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:34 crc kubenswrapper[4688]: I1125 12:43:34.176422 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr"] Nov 25 12:43:34 crc kubenswrapper[4688]: I1125 12:43:34.220587 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" event={"ID":"1f744c38-f708-44f5-952a-419118bcade4","Type":"ContainerStarted","Data":"10d9674a508aedd95718d82eada94f8c606de10a8fcc17b45af5aac0a237f5db"} Nov 25 12:43:35 crc kubenswrapper[4688]: I1125 12:43:35.233772 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" event={"ID":"1f744c38-f708-44f5-952a-419118bcade4","Type":"ContainerStarted","Data":"bd830d0d06e0cf517d11f1c845c45c4f415f33b7089909a98ffe68324621de94"} Nov 25 12:43:35 crc kubenswrapper[4688]: I1125 12:43:35.252939 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" podStartSLOduration=1.759676195 podStartE2EDuration="2.252920094s" podCreationTimestamp="2025-11-25 12:43:33 +0000 UTC" firstStartedPulling="2025-11-25 12:43:34.178622746 +0000 UTC m=+1764.288251614" lastFinishedPulling="2025-11-25 12:43:34.671866605 +0000 UTC m=+1764.781495513" observedRunningTime="2025-11-25 12:43:35.248719701 +0000 UTC m=+1765.358348579" watchObservedRunningTime="2025-11-25 12:43:35.252920094 +0000 UTC m=+1765.362548962" Nov 25 12:43:39 crc kubenswrapper[4688]: I1125 12:43:39.035662 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bclxq"] Nov 25 12:43:39 crc kubenswrapper[4688]: I1125 12:43:39.050706 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-db-create-5lgln"] Nov 25 12:43:39 crc kubenswrapper[4688]: I1125 12:43:39.058196 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-96cc-account-create-j8vlk"] Nov 25 12:43:39 crc kubenswrapper[4688]: I1125 12:43:39.067270 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-96cc-account-create-j8vlk"] Nov 25 12:43:39 crc kubenswrapper[4688]: I1125 12:43:39.078087 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5lgln"] Nov 25 12:43:39 crc kubenswrapper[4688]: I1125 12:43:39.088708 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bclxq"] Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.031018 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-43e9-account-create-pzpw6"] Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.041771 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hrqzl"] Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.052027 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0d93-account-create-7d8vt"] Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.061707 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hrqzl"] Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.071958 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-43e9-account-create-pzpw6"] Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.082441 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0d93-account-create-7d8vt"] Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.285421 4688 generic.go:334] "Generic (PLEG): container finished" podID="1f744c38-f708-44f5-952a-419118bcade4" containerID="bd830d0d06e0cf517d11f1c845c45c4f415f33b7089909a98ffe68324621de94" exitCode=0 Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.285497 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" event={"ID":"1f744c38-f708-44f5-952a-419118bcade4","Type":"ContainerDied","Data":"bd830d0d06e0cf517d11f1c845c45c4f415f33b7089909a98ffe68324621de94"} Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.755450 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dfbc1c4-4a11-470d-8fa3-d3b438268fce" path="/var/lib/kubelet/pods/5dfbc1c4-4a11-470d-8fa3-d3b438268fce/volumes" Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.756227 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f" path="/var/lib/kubelet/pods/5e63cbbe-8ddb-4b2c-9e71-e05b50398f9f/volumes" Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.756760 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616880c6-3a97-4e57-80ab-6ffc23eddb21" path="/var/lib/kubelet/pods/616880c6-3a97-4e57-80ab-6ffc23eddb21/volumes" Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.757379 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d08010-f05e-4554-9b4c-7acbf51553c3" path="/var/lib/kubelet/pods/b1d08010-f05e-4554-9b4c-7acbf51553c3/volumes" Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.758401 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7bb810b-ae07-4550-a2b1-4eba500f63ad" 
path="/var/lib/kubelet/pods/b7bb810b-ae07-4550-a2b1-4eba500f63ad/volumes" Nov 25 12:43:40 crc kubenswrapper[4688]: I1125 12:43:40.758932 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c98f2c01-29ed-4744-8e2b-b62291565faf" path="/var/lib/kubelet/pods/c98f2c01-29ed-4744-8e2b-b62291565faf/volumes" Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.701595 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.843953 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzcjc\" (UniqueName: \"kubernetes.io/projected/1f744c38-f708-44f5-952a-419118bcade4-kube-api-access-tzcjc\") pod \"1f744c38-f708-44f5-952a-419118bcade4\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.844092 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-ssh-key\") pod \"1f744c38-f708-44f5-952a-419118bcade4\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.844163 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-inventory\") pod \"1f744c38-f708-44f5-952a-419118bcade4\" (UID: \"1f744c38-f708-44f5-952a-419118bcade4\") " Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.851463 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f744c38-f708-44f5-952a-419118bcade4-kube-api-access-tzcjc" (OuterVolumeSpecName: "kube-api-access-tzcjc") pod "1f744c38-f708-44f5-952a-419118bcade4" (UID: "1f744c38-f708-44f5-952a-419118bcade4"). InnerVolumeSpecName "kube-api-access-tzcjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.875762 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-inventory" (OuterVolumeSpecName: "inventory") pod "1f744c38-f708-44f5-952a-419118bcade4" (UID: "1f744c38-f708-44f5-952a-419118bcade4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.894189 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1f744c38-f708-44f5-952a-419118bcade4" (UID: "1f744c38-f708-44f5-952a-419118bcade4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.946087 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzcjc\" (UniqueName: \"kubernetes.io/projected/1f744c38-f708-44f5-952a-419118bcade4-kube-api-access-tzcjc\") on node \"crc\" DevicePath \"\"" Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.946123 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:43:41 crc kubenswrapper[4688]: I1125 12:43:41.946134 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f744c38-f708-44f5-952a-419118bcade4-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.305359 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" event={"ID":"1f744c38-f708-44f5-952a-419118bcade4","Type":"ContainerDied","Data":"10d9674a508aedd95718d82eada94f8c606de10a8fcc17b45af5aac0a237f5db"} Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.305405 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d9674a508aedd95718d82eada94f8c606de10a8fcc17b45af5aac0a237f5db" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.305501 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.388691 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl"] Nov 25 12:43:42 crc kubenswrapper[4688]: E1125 12:43:42.389511 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f744c38-f708-44f5-952a-419118bcade4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.389627 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f744c38-f708-44f5-952a-419118bcade4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.389875 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f744c38-f708-44f5-952a-419118bcade4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.390674 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.392926 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.393009 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.393133 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.393324 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.402538 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl"] Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.456207 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.456286 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jcbg\" (UniqueName: \"kubernetes.io/projected/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-kube-api-access-8jcbg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.456553 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.559318 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.559450 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jcbg\" (UniqueName: \"kubernetes.io/projected/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-kube-api-access-8jcbg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.559545 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: 
\"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.563390 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.563545 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.577179 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jcbg\" (UniqueName: \"kubernetes.io/projected/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-kube-api-access-8jcbg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8ljvl\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:42 crc kubenswrapper[4688]: I1125 12:43:42.720863 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:43:43 crc kubenswrapper[4688]: I1125 12:43:43.218391 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl"] Nov 25 12:43:43 crc kubenswrapper[4688]: I1125 12:43:43.315664 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" event={"ID":"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b","Type":"ContainerStarted","Data":"50566574d8e3ab11c051a7564c6bba481cdab83f3172b4742100bcf904c33def"} Nov 25 12:43:43 crc kubenswrapper[4688]: I1125 12:43:43.739451 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:43:43 crc kubenswrapper[4688]: E1125 12:43:43.739837 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:43:44 crc kubenswrapper[4688]: I1125 12:43:44.326670 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" event={"ID":"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b","Type":"ContainerStarted","Data":"ae841557838b3a611f43d11c38005046cce4fdb70d964598c7e785eebec40f23"} Nov 25 12:43:44 crc kubenswrapper[4688]: I1125 12:43:44.348865 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" podStartSLOduration=1.7243040189999999 podStartE2EDuration="2.348845862s" podCreationTimestamp="2025-11-25 12:43:42 +0000 UTC" firstStartedPulling="2025-11-25 12:43:43.227800372 +0000 UTC m=+1773.337429240" 
lastFinishedPulling="2025-11-25 12:43:43.852342215 +0000 UTC m=+1773.961971083" observedRunningTime="2025-11-25 12:43:44.347116526 +0000 UTC m=+1774.456745384" watchObservedRunningTime="2025-11-25 12:43:44.348845862 +0000 UTC m=+1774.458474730" Nov 25 12:43:58 crc kubenswrapper[4688]: I1125 12:43:58.739880 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:43:58 crc kubenswrapper[4688]: E1125 12:43:58.740894 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:44:07 crc kubenswrapper[4688]: I1125 12:44:07.050543 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdzdj"] Nov 25 12:44:07 crc kubenswrapper[4688]: I1125 12:44:07.061824 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wdzdj"] Nov 25 12:44:08 crc kubenswrapper[4688]: I1125 12:44:08.752745 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ae21eb-1699-4d0b-ba93-54b5d07e24ae" path="/var/lib/kubelet/pods/42ae21eb-1699-4d0b-ba93-54b5d07e24ae/volumes" Nov 25 12:44:12 crc kubenswrapper[4688]: I1125 12:44:12.740469 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:44:12 crc kubenswrapper[4688]: E1125 12:44:12.741180 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:44:21 crc kubenswrapper[4688]: I1125 12:44:21.661695 4688 generic.go:334] "Generic (PLEG): container finished" podID="a432e01c-b2a4-453c-b40b-d8fadf5a1b3b" containerID="ae841557838b3a611f43d11c38005046cce4fdb70d964598c7e785eebec40f23" exitCode=0 Nov 25 12:44:21 crc kubenswrapper[4688]: I1125 12:44:21.661980 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" event={"ID":"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b","Type":"ContainerDied","Data":"ae841557838b3a611f43d11c38005046cce4fdb70d964598c7e785eebec40f23"} Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.094369 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.231673 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-inventory\") pod \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.231723 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-ssh-key\") pod \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.231813 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jcbg\" (UniqueName: \"kubernetes.io/projected/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-kube-api-access-8jcbg\") pod \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\" (UID: \"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b\") " Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.238589 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-kube-api-access-8jcbg" (OuterVolumeSpecName: "kube-api-access-8jcbg") pod "a432e01c-b2a4-453c-b40b-d8fadf5a1b3b" (UID: "a432e01c-b2a4-453c-b40b-d8fadf5a1b3b"). InnerVolumeSpecName "kube-api-access-8jcbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.260560 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a432e01c-b2a4-453c-b40b-d8fadf5a1b3b" (UID: "a432e01c-b2a4-453c-b40b-d8fadf5a1b3b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.269346 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-inventory" (OuterVolumeSpecName: "inventory") pod "a432e01c-b2a4-453c-b40b-d8fadf5a1b3b" (UID: "a432e01c-b2a4-453c-b40b-d8fadf5a1b3b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.334092 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.334419 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.334570 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jcbg\" (UniqueName: \"kubernetes.io/projected/a432e01c-b2a4-453c-b40b-d8fadf5a1b3b-kube-api-access-8jcbg\") on node \"crc\" DevicePath \"\"" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.678943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" event={"ID":"a432e01c-b2a4-453c-b40b-d8fadf5a1b3b","Type":"ContainerDied","Data":"50566574d8e3ab11c051a7564c6bba481cdab83f3172b4742100bcf904c33def"} Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.678981 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8ljvl" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.678992 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50566574d8e3ab11c051a7564c6bba481cdab83f3172b4742100bcf904c33def" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.767546 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m"] Nov 25 12:44:23 crc kubenswrapper[4688]: E1125 12:44:23.767990 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a432e01c-b2a4-453c-b40b-d8fadf5a1b3b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.768013 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a432e01c-b2a4-453c-b40b-d8fadf5a1b3b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.768286 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a432e01c-b2a4-453c-b40b-d8fadf5a1b3b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.769091 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.771608 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.771884 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.772105 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.774964 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.793662 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m"] Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.946513 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67pvz\" (UniqueName: \"kubernetes.io/projected/0decfda5-2230-4d90-bc7c-f641bacb6117-kube-api-access-67pvz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.946969 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:23 crc kubenswrapper[4688]: I1125 12:44:23.947229 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.049504 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.049779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.049904 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67pvz\" (UniqueName: \"kubernetes.io/projected/0decfda5-2230-4d90-bc7c-f641bacb6117-kube-api-access-67pvz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" 
(UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.057679 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.061484 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.074923 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67pvz\" (UniqueName: \"kubernetes.io/projected/0decfda5-2230-4d90-bc7c-f641bacb6117-kube-api-access-67pvz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.096446 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.628161 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m"] Nov 25 12:44:24 crc kubenswrapper[4688]: I1125 12:44:24.687108 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" event={"ID":"0decfda5-2230-4d90-bc7c-f641bacb6117","Type":"ContainerStarted","Data":"fe08e3d1dc1db40ce1010879eff7cac4a6b4a7c1ff5dda339b18cb937ae546bd"} Nov 25 12:44:25 crc kubenswrapper[4688]: I1125 12:44:25.741731 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:44:25 crc kubenswrapper[4688]: E1125 12:44:25.742351 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:44:26 crc kubenswrapper[4688]: I1125 12:44:26.041918 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hjtmq"] Nov 25 12:44:26 crc kubenswrapper[4688]: I1125 12:44:26.052906 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vjwr8"] Nov 25 12:44:26 crc kubenswrapper[4688]: I1125 12:44:26.063202 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hjtmq"] Nov 25 12:44:26 crc kubenswrapper[4688]: I1125 12:44:26.070799 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vjwr8"] Nov 25 12:44:26 crc kubenswrapper[4688]: I1125 12:44:26.708511 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" event={"ID":"0decfda5-2230-4d90-bc7c-f641bacb6117","Type":"ContainerStarted","Data":"2f2f2ac0aa238dd281c423da69364c717f80be797424d665ff19c8fffa9601af"} Nov 25 12:44:26 crc kubenswrapper[4688]: I1125 12:44:26.726638 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" podStartSLOduration=2.413026019 podStartE2EDuration="3.726599929s" podCreationTimestamp="2025-11-25 12:44:23 +0000 UTC" firstStartedPulling="2025-11-25 12:44:24.629118786 +0000 UTC m=+1814.738747694" lastFinishedPulling="2025-11-25 12:44:25.942692696 +0000 UTC m=+1816.052321604" observedRunningTime="2025-11-25 12:44:26.72330798 +0000 UTC m=+1816.832936848" watchObservedRunningTime="2025-11-25 12:44:26.726599929 +0000 UTC m=+1816.836228797" Nov 25 12:44:26 crc kubenswrapper[4688]: I1125 12:44:26.750060 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f89f99a-e930-449e-8508-0c15309f5b8b" path="/var/lib/kubelet/pods/2f89f99a-e930-449e-8508-0c15309f5b8b/volumes" Nov 25 12:44:26 crc kubenswrapper[4688]: I1125 12:44:26.750733 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e137d392-3d34-468e-8a68-ed64665b2200" path="/var/lib/kubelet/pods/e137d392-3d34-468e-8a68-ed64665b2200/volumes" Nov 25 12:44:30 crc kubenswrapper[4688]: I1125 12:44:30.782309 4688 scope.go:117] "RemoveContainer" containerID="2494fb7005a764a03ef899080074d3a893f8baf4c55b965f3c396b3846d691c3" Nov 25 12:44:30 crc kubenswrapper[4688]: I1125 12:44:30.836234 4688 scope.go:117] "RemoveContainer" containerID="1e2cd5746d5093bf6ca86dceaf489b21505180932e19318b5d6a385fd0289f23" Nov 25 12:44:30 crc kubenswrapper[4688]: I1125 12:44:30.870717 4688 scope.go:117] "RemoveContainer" containerID="e2b7a68fae6cbf42304ad003ba486647b80e904684be1b30af805e8b93a0a7d4" Nov 25 12:44:30 crc kubenswrapper[4688]: I1125 12:44:30.917161 4688 scope.go:117] "RemoveContainer" containerID="0ed8192feb617d2d347dbb63dcb0ca175825d27b888224c14777467f70808d56" Nov 25 12:44:30 crc kubenswrapper[4688]: I1125 12:44:30.974077 4688 scope.go:117] "RemoveContainer" containerID="f5e3d624a85011ec7e2c59e9ba06c08c5d17be8dd9c2e29db7d8cb487699bf72" Nov 25 12:44:31 crc kubenswrapper[4688]: I1125 12:44:31.041693 4688 scope.go:117] "RemoveContainer" containerID="41de2305dfae4a0598e00f6ed165d47417ca6038cd1e6ed8a962109f476518df" Nov 25 12:44:31 crc kubenswrapper[4688]: I1125 12:44:31.075494 4688 scope.go:117] "RemoveContainer" containerID="e1ff6fac8b2b13e667d472ab5af5fd20d26bcfd86ba9c910d5d818c96b72f747" Nov 25 12:44:31 crc kubenswrapper[4688]: I1125 12:44:31.095729 4688 scope.go:117] "RemoveContainer" containerID="e1ca3f7b7267f335b219bfc1c8037cc32dce896e764c9bc47a849cf40e31288e" Nov 25 12:44:31 crc kubenswrapper[4688]: I1125 12:44:31.123016 4688 scope.go:117] "RemoveContainer" containerID="4d16272065b827d7357ad6efad232e90ceff1ea628372978f864a8b0e7968a62" Nov 25 12:44:36 crc kubenswrapper[4688]: I1125 12:44:36.740998 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:44:36 crc kubenswrapper[4688]: E1125 12:44:36.741877 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:44:47 crc kubenswrapper[4688]: I1125 12:44:47.740783 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:44:47 crc kubenswrapper[4688]: E1125 12:44:47.741917 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.159594 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w"] Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.172645 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w"] Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.172745 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.175194 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.176845 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.262307 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6910360d-8cc4-4472-898d-c08254b8575d-secret-volume\") pod \"collect-profiles-29401245-tgl9w\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.262770 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j47s5\" (UniqueName: \"kubernetes.io/projected/6910360d-8cc4-4472-898d-c08254b8575d-kube-api-access-j47s5\") pod \"collect-profiles-29401245-tgl9w\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.263003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6910360d-8cc4-4472-898d-c08254b8575d-config-volume\") pod \"collect-profiles-29401245-tgl9w\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.364903 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6910360d-8cc4-4472-898d-c08254b8575d-secret-volume\") pod \"collect-profiles-29401245-tgl9w\" (UID: 
\"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.364965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j47s5\" (UniqueName: \"kubernetes.io/projected/6910360d-8cc4-4472-898d-c08254b8575d-kube-api-access-j47s5\") pod \"collect-profiles-29401245-tgl9w\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.365089 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6910360d-8cc4-4472-898d-c08254b8575d-config-volume\") pod \"collect-profiles-29401245-tgl9w\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.366221 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6910360d-8cc4-4472-898d-c08254b8575d-config-volume\") pod \"collect-profiles-29401245-tgl9w\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.371452 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6910360d-8cc4-4472-898d-c08254b8575d-secret-volume\") pod \"collect-profiles-29401245-tgl9w\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.388377 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j47s5\" (UniqueName: \"kubernetes.io/projected/6910360d-8cc4-4472-898d-c08254b8575d-kube-api-access-j47s5\") pod \"collect-profiles-29401245-tgl9w\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.495913 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.749691 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:45:00 crc kubenswrapper[4688]: E1125 12:45:00.750284 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:45:00 crc kubenswrapper[4688]: I1125 12:45:00.982809 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w"] Nov 25 12:45:01 crc kubenswrapper[4688]: I1125 12:45:01.041246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" event={"ID":"6910360d-8cc4-4472-898d-c08254b8575d","Type":"ContainerStarted","Data":"733084ddab49f673dbb227e221bfab1bbdeb5a0e8bb32bee4563f86a0959a9e5"} Nov 25 12:45:02 crc kubenswrapper[4688]: I1125 12:45:02.059857 4688 generic.go:334] "Generic (PLEG): container finished" podID="6910360d-8cc4-4472-898d-c08254b8575d" containerID="75cbe4d64ee6b3dbec0e0a58bc653d92b0a2375d49d0405448b50e1f5eb87e72" exitCode=0 Nov 25 12:45:02 crc kubenswrapper[4688]: I1125 12:45:02.060823 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" event={"ID":"6910360d-8cc4-4472-898d-c08254b8575d","Type":"ContainerDied","Data":"75cbe4d64ee6b3dbec0e0a58bc653d92b0a2375d49d0405448b50e1f5eb87e72"} Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.428710 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.520809 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j47s5\" (UniqueName: \"kubernetes.io/projected/6910360d-8cc4-4472-898d-c08254b8575d-kube-api-access-j47s5\") pod \"6910360d-8cc4-4472-898d-c08254b8575d\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.521046 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6910360d-8cc4-4472-898d-c08254b8575d-secret-volume\") pod \"6910360d-8cc4-4472-898d-c08254b8575d\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.521098 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6910360d-8cc4-4472-898d-c08254b8575d-config-volume\") pod \"6910360d-8cc4-4472-898d-c08254b8575d\" (UID: \"6910360d-8cc4-4472-898d-c08254b8575d\") " Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.521698 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6910360d-8cc4-4472-898d-c08254b8575d-config-volume" (OuterVolumeSpecName: "config-volume") pod "6910360d-8cc4-4472-898d-c08254b8575d" (UID: "6910360d-8cc4-4472-898d-c08254b8575d"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.526748 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6910360d-8cc4-4472-898d-c08254b8575d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6910360d-8cc4-4472-898d-c08254b8575d" (UID: "6910360d-8cc4-4472-898d-c08254b8575d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.530096 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6910360d-8cc4-4472-898d-c08254b8575d-kube-api-access-j47s5" (OuterVolumeSpecName: "kube-api-access-j47s5") pod "6910360d-8cc4-4472-898d-c08254b8575d" (UID: "6910360d-8cc4-4472-898d-c08254b8575d"). InnerVolumeSpecName "kube-api-access-j47s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.622639 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6910360d-8cc4-4472-898d-c08254b8575d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.622668 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6910360d-8cc4-4472-898d-c08254b8575d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:03 crc kubenswrapper[4688]: I1125 12:45:03.622677 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j47s5\" (UniqueName: \"kubernetes.io/projected/6910360d-8cc4-4472-898d-c08254b8575d-kube-api-access-j47s5\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:04 crc kubenswrapper[4688]: I1125 12:45:04.086951 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" event={"ID":"6910360d-8cc4-4472-898d-c08254b8575d","Type":"ContainerDied","Data":"733084ddab49f673dbb227e221bfab1bbdeb5a0e8bb32bee4563f86a0959a9e5"} Nov 25 12:45:04 crc kubenswrapper[4688]: I1125 12:45:04.087292 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="733084ddab49f673dbb227e221bfab1bbdeb5a0e8bb32bee4563f86a0959a9e5" Nov 25 12:45:04 crc kubenswrapper[4688]: I1125 12:45:04.086988 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w" Nov 25 12:45:10 crc kubenswrapper[4688]: I1125 12:45:10.036062 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-s8lfl"] Nov 25 12:45:10 crc kubenswrapper[4688]: I1125 12:45:10.047909 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-s8lfl"] Nov 25 12:45:10 crc kubenswrapper[4688]: I1125 12:45:10.780618 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563d1d64-50e6-4460-8365-5ead3bc46347" path="/var/lib/kubelet/pods/563d1d64-50e6-4460-8365-5ead3bc46347/volumes" Nov 25 12:45:11 crc kubenswrapper[4688]: I1125 12:45:11.740436 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:45:11 crc kubenswrapper[4688]: E1125 12:45:11.741297 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:45:17 crc kubenswrapper[4688]: I1125 12:45:17.204659 4688 generic.go:334] "Generic (PLEG): container finished" podID="0decfda5-2230-4d90-bc7c-f641bacb6117" containerID="2f2f2ac0aa238dd281c423da69364c717f80be797424d665ff19c8fffa9601af" exitCode=0 Nov 25 12:45:17 crc kubenswrapper[4688]: I1125 12:45:17.204774 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" event={"ID":"0decfda5-2230-4d90-bc7c-f641bacb6117","Type":"ContainerDied","Data":"2f2f2ac0aa238dd281c423da69364c717f80be797424d665ff19c8fffa9601af"} Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.734881 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.835512 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-ssh-key\") pod \"0decfda5-2230-4d90-bc7c-f641bacb6117\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.835640 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-inventory\") pod \"0decfda5-2230-4d90-bc7c-f641bacb6117\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.835665 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67pvz\" (UniqueName: \"kubernetes.io/projected/0decfda5-2230-4d90-bc7c-f641bacb6117-kube-api-access-67pvz\") pod \"0decfda5-2230-4d90-bc7c-f641bacb6117\" (UID: \"0decfda5-2230-4d90-bc7c-f641bacb6117\") " Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.844429 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0decfda5-2230-4d90-bc7c-f641bacb6117-kube-api-access-67pvz" (OuterVolumeSpecName: "kube-api-access-67pvz") pod "0decfda5-2230-4d90-bc7c-f641bacb6117" (UID: "0decfda5-2230-4d90-bc7c-f641bacb6117"). InnerVolumeSpecName "kube-api-access-67pvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.866413 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0decfda5-2230-4d90-bc7c-f641bacb6117" (UID: "0decfda5-2230-4d90-bc7c-f641bacb6117"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.871924 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-inventory" (OuterVolumeSpecName: "inventory") pod "0decfda5-2230-4d90-bc7c-f641bacb6117" (UID: "0decfda5-2230-4d90-bc7c-f641bacb6117"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.937863 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.937903 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67pvz\" (UniqueName: \"kubernetes.io/projected/0decfda5-2230-4d90-bc7c-f641bacb6117-kube-api-access-67pvz\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:18 crc kubenswrapper[4688]: I1125 12:45:18.937917 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0decfda5-2230-4d90-bc7c-f641bacb6117-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.223041 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" event={"ID":"0decfda5-2230-4d90-bc7c-f641bacb6117","Type":"ContainerDied","Data":"fe08e3d1dc1db40ce1010879eff7cac4a6b4a7c1ff5dda339b18cb937ae546bd"} Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.223089 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe08e3d1dc1db40ce1010879eff7cac4a6b4a7c1ff5dda339b18cb937ae546bd" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.223097 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.312463 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zk4hf"] Nov 25 12:45:19 crc kubenswrapper[4688]: E1125 12:45:19.313025 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6910360d-8cc4-4472-898d-c08254b8575d" containerName="collect-profiles" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.313049 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6910360d-8cc4-4472-898d-c08254b8575d" containerName="collect-profiles" Nov 25 12:45:19 crc kubenswrapper[4688]: E1125 12:45:19.313098 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0decfda5-2230-4d90-bc7c-f641bacb6117" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.313109 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0decfda5-2230-4d90-bc7c-f641bacb6117" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.313354 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0decfda5-2230-4d90-bc7c-f641bacb6117" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.313393 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6910360d-8cc4-4472-898d-c08254b8575d" containerName="collect-profiles" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.314164 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.319570 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.319576 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.319829 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.320054 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.324199 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zk4hf"] Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.359013 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.359074 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p758n\" (UniqueName: \"kubernetes.io/projected/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-kube-api-access-p758n\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.359252 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.461018 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.461088 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p758n\" (UniqueName: \"kubernetes.io/projected/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-kube-api-access-p758n\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.461171 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc 
kubenswrapper[4688]: I1125 12:45:19.468265 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.468316 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.485352 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p758n\" (UniqueName: \"kubernetes.io/projected/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-kube-api-access-p758n\") pod \"ssh-known-hosts-edpm-deployment-zk4hf\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:19 crc kubenswrapper[4688]: I1125 12:45:19.645678 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:20 crc kubenswrapper[4688]: I1125 12:45:20.190925 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zk4hf"] Nov 25 12:45:20 crc kubenswrapper[4688]: I1125 12:45:20.235066 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" event={"ID":"c3ed5d98-1ee9-4de0-9387-cb3082a348bd","Type":"ContainerStarted","Data":"7d25f9dcc814aa48795de8bd756f095d22716c526c1e310a84bece7691316c1b"} Nov 25 12:45:21 crc kubenswrapper[4688]: I1125 12:45:21.246338 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" event={"ID":"c3ed5d98-1ee9-4de0-9387-cb3082a348bd","Type":"ContainerStarted","Data":"df16c26712f0148f9ca538c68c8b23eee8258b7db5a8f92b17c0b461ad0de960"} Nov 25 12:45:21 crc kubenswrapper[4688]: I1125 12:45:21.261270 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" podStartSLOduration=1.809768558 podStartE2EDuration="2.261252642s" podCreationTimestamp="2025-11-25 12:45:19 +0000 UTC" firstStartedPulling="2025-11-25 12:45:20.190100211 +0000 UTC m=+1870.299729079" lastFinishedPulling="2025-11-25 12:45:20.641584295 +0000 UTC m=+1870.751213163" observedRunningTime="2025-11-25 12:45:21.260339397 +0000 UTC m=+1871.369968275" watchObservedRunningTime="2025-11-25 12:45:21.261252642 +0000 UTC m=+1871.370881510" Nov 25 12:45:25 crc kubenswrapper[4688]: I1125 12:45:25.739880 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:45:25 crc kubenswrapper[4688]: E1125 12:45:25.741359 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:45:28 crc 
kubenswrapper[4688]: I1125 12:45:28.312403 4688 generic.go:334] "Generic (PLEG): container finished" podID="c3ed5d98-1ee9-4de0-9387-cb3082a348bd" containerID="df16c26712f0148f9ca538c68c8b23eee8258b7db5a8f92b17c0b461ad0de960" exitCode=0 Nov 25 12:45:28 crc kubenswrapper[4688]: I1125 12:45:28.312502 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" event={"ID":"c3ed5d98-1ee9-4de0-9387-cb3082a348bd","Type":"ContainerDied","Data":"df16c26712f0148f9ca538c68c8b23eee8258b7db5a8f92b17c0b461ad0de960"} Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.788104 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.886986 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p758n\" (UniqueName: \"kubernetes.io/projected/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-kube-api-access-p758n\") pod \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.887070 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-inventory-0\") pod \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.887155 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-ssh-key-openstack-edpm-ipam\") pod \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\" (UID: \"c3ed5d98-1ee9-4de0-9387-cb3082a348bd\") " Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.892599 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-kube-api-access-p758n" (OuterVolumeSpecName: "kube-api-access-p758n") pod "c3ed5d98-1ee9-4de0-9387-cb3082a348bd" (UID: "c3ed5d98-1ee9-4de0-9387-cb3082a348bd"). InnerVolumeSpecName "kube-api-access-p758n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.919929 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c3ed5d98-1ee9-4de0-9387-cb3082a348bd" (UID: "c3ed5d98-1ee9-4de0-9387-cb3082a348bd"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.921027 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3ed5d98-1ee9-4de0-9387-cb3082a348bd" (UID: "c3ed5d98-1ee9-4de0-9387-cb3082a348bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.989963 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p758n\" (UniqueName: \"kubernetes.io/projected/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-kube-api-access-p758n\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.990004 4688 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:29 crc kubenswrapper[4688]: I1125 12:45:29.990017 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ed5d98-1ee9-4de0-9387-cb3082a348bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.335638 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" event={"ID":"c3ed5d98-1ee9-4de0-9387-cb3082a348bd","Type":"ContainerDied","Data":"7d25f9dcc814aa48795de8bd756f095d22716c526c1e310a84bece7691316c1b"} Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.335679 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zk4hf" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.335727 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d25f9dcc814aa48795de8bd756f095d22716c526c1e310a84bece7691316c1b" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.412309 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p"] Nov 25 12:45:30 crc kubenswrapper[4688]: E1125 12:45:30.412771 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ed5d98-1ee9-4de0-9387-cb3082a348bd" containerName="ssh-known-hosts-edpm-deployment" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.412786 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ed5d98-1ee9-4de0-9387-cb3082a348bd" containerName="ssh-known-hosts-edpm-deployment" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.413000 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ed5d98-1ee9-4de0-9387-cb3082a348bd" containerName="ssh-known-hosts-edpm-deployment" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.413733 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.417893 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.418011 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.418060 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.418858 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.424225 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p"] Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.498088 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.498373 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.498599 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrl4f\" (UniqueName: \"kubernetes.io/projected/09188b82-0612-4538-b6ca-7517d7da935b-kube-api-access-xrl4f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.599996 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrl4f\" (UniqueName: \"kubernetes.io/projected/09188b82-0612-4538-b6ca-7517d7da935b-kube-api-access-xrl4f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.600046 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.600118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.605737 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.617057 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.620047 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrl4f\" (UniqueName: \"kubernetes.io/projected/09188b82-0612-4538-b6ca-7517d7da935b-kube-api-access-xrl4f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-x8w6p\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:30 crc kubenswrapper[4688]: I1125 12:45:30.732005 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:31 crc kubenswrapper[4688]: I1125 12:45:31.277439 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p"] Nov 25 12:45:31 crc kubenswrapper[4688]: I1125 12:45:31.318569 4688 scope.go:117] "RemoveContainer" containerID="41855d143c71b4f2a185281bac712b7fc8df26c17c239fae0513d8901788e9f6" Nov 25 12:45:31 crc kubenswrapper[4688]: I1125 12:45:31.346000 4688 scope.go:117] "RemoveContainer" containerID="a0b7650af5ba7ffdbaed4131e6e1fe89976b4c18e3d4d0d4920edad38794a4da" Nov 25 12:45:31 crc kubenswrapper[4688]: I1125 12:45:31.349307 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" event={"ID":"09188b82-0612-4538-b6ca-7517d7da935b","Type":"ContainerStarted","Data":"48e30bbb275226b6d54712bc6c95fc7777126d5b91c5ec0700e5bf86289de41b"} Nov 25 12:45:31 crc kubenswrapper[4688]: I1125 12:45:31.418704 4688 scope.go:117] "RemoveContainer" containerID="3b322a896c7a0b65a6538914d0343cc49da7f4f3fa27225f93af20e30d089cd1" Nov 25 12:45:31 crc kubenswrapper[4688]: I1125 12:45:31.438196 4688 scope.go:117] "RemoveContainer" containerID="9d3344105243dc7a685c68cb3d1d997e12aa3733416d5ff7966ac5f8682f1e25" Nov 25 12:45:32 crc kubenswrapper[4688]: I1125 12:45:32.361910 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" event={"ID":"09188b82-0612-4538-b6ca-7517d7da935b","Type":"ContainerStarted","Data":"7333a48460a91afcf39493079fe16ddb7b7adcbcc6ab655cd84ccc47d97e4777"} Nov 25 12:45:32 crc kubenswrapper[4688]: I1125 12:45:32.382414 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" podStartSLOduration=1.952158352 podStartE2EDuration="2.382393757s" podCreationTimestamp="2025-11-25 12:45:30 +0000 UTC" firstStartedPulling="2025-11-25 12:45:31.284631774 +0000 UTC m=+1881.394260642" lastFinishedPulling="2025-11-25 12:45:31.714867169 +0000 UTC 
m=+1881.824496047" observedRunningTime="2025-11-25 12:45:32.379406767 +0000 UTC m=+1882.489035645" watchObservedRunningTime="2025-11-25 12:45:32.382393757 +0000 UTC m=+1882.492022625" Nov 25 12:45:38 crc kubenswrapper[4688]: I1125 12:45:38.739663 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:45:38 crc kubenswrapper[4688]: E1125 12:45:38.740376 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:45:40 crc kubenswrapper[4688]: I1125 12:45:40.448160 4688 generic.go:334] "Generic (PLEG): container finished" podID="09188b82-0612-4538-b6ca-7517d7da935b" containerID="7333a48460a91afcf39493079fe16ddb7b7adcbcc6ab655cd84ccc47d97e4777" exitCode=0 Nov 25 12:45:40 crc kubenswrapper[4688]: I1125 12:45:40.448258 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" event={"ID":"09188b82-0612-4538-b6ca-7517d7da935b","Type":"ContainerDied","Data":"7333a48460a91afcf39493079fe16ddb7b7adcbcc6ab655cd84ccc47d97e4777"} Nov 25 12:45:41 crc kubenswrapper[4688]: I1125 12:45:41.896441 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:41 crc kubenswrapper[4688]: I1125 12:45:41.933730 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrl4f\" (UniqueName: \"kubernetes.io/projected/09188b82-0612-4538-b6ca-7517d7da935b-kube-api-access-xrl4f\") pod \"09188b82-0612-4538-b6ca-7517d7da935b\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " Nov 25 12:45:41 crc kubenswrapper[4688]: I1125 12:45:41.933778 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-ssh-key\") pod \"09188b82-0612-4538-b6ca-7517d7da935b\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " Nov 25 12:45:41 crc kubenswrapper[4688]: I1125 12:45:41.934032 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-inventory\") pod \"09188b82-0612-4538-b6ca-7517d7da935b\" (UID: \"09188b82-0612-4538-b6ca-7517d7da935b\") " Nov 25 12:45:41 crc kubenswrapper[4688]: I1125 12:45:41.939770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09188b82-0612-4538-b6ca-7517d7da935b-kube-api-access-xrl4f" (OuterVolumeSpecName: "kube-api-access-xrl4f") pod "09188b82-0612-4538-b6ca-7517d7da935b" (UID: "09188b82-0612-4538-b6ca-7517d7da935b"). InnerVolumeSpecName "kube-api-access-xrl4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:45:41 crc kubenswrapper[4688]: I1125 12:45:41.971844 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-inventory" (OuterVolumeSpecName: "inventory") pod "09188b82-0612-4538-b6ca-7517d7da935b" (UID: "09188b82-0612-4538-b6ca-7517d7da935b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:41 crc kubenswrapper[4688]: I1125 12:45:41.971956 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "09188b82-0612-4538-b6ca-7517d7da935b" (UID: "09188b82-0612-4538-b6ca-7517d7da935b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.036646 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.036677 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrl4f\" (UniqueName: \"kubernetes.io/projected/09188b82-0612-4538-b6ca-7517d7da935b-kube-api-access-xrl4f\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.036686 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09188b82-0612-4538-b6ca-7517d7da935b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.468393 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" event={"ID":"09188b82-0612-4538-b6ca-7517d7da935b","Type":"ContainerDied","Data":"48e30bbb275226b6d54712bc6c95fc7777126d5b91c5ec0700e5bf86289de41b"} Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.468436 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e30bbb275226b6d54712bc6c95fc7777126d5b91c5ec0700e5bf86289de41b" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.468495 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-x8w6p" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.547558 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k"] Nov 25 12:45:42 crc kubenswrapper[4688]: E1125 12:45:42.548769 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09188b82-0612-4538-b6ca-7517d7da935b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.548812 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="09188b82-0612-4538-b6ca-7517d7da935b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.549246 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="09188b82-0612-4538-b6ca-7517d7da935b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.550700 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.552787 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.553143 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.553561 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.555104 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.565762 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k"] Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.647284 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmdj\" (UniqueName: \"kubernetes.io/projected/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-kube-api-access-fpmdj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.647713 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.647848 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.750028 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.750086 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.750167 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpmdj\" (UniqueName: \"kubernetes.io/projected/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-kube-api-access-fpmdj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: 
\"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.753766 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.753904 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.765786 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpmdj\" (UniqueName: \"kubernetes.io/projected/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-kube-api-access-fpmdj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:42 crc kubenswrapper[4688]: I1125 12:45:42.915582 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:43 crc kubenswrapper[4688]: I1125 12:45:43.455078 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k"] Nov 25 12:45:43 crc kubenswrapper[4688]: W1125 12:45:43.456677 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9463536c_fd6c_4aee_b3a9_c7f20996f5c7.slice/crio-d70aaecd1ba52b08cc2699ab3687dcd8ddd8753ef5947d1ac94cf0b299e9b312 WatchSource:0}: Error finding container d70aaecd1ba52b08cc2699ab3687dcd8ddd8753ef5947d1ac94cf0b299e9b312: Status 404 returned error can't find the container with id d70aaecd1ba52b08cc2699ab3687dcd8ddd8753ef5947d1ac94cf0b299e9b312 Nov 25 12:45:43 crc kubenswrapper[4688]: I1125 12:45:43.478722 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" event={"ID":"9463536c-fd6c-4aee-b3a9-c7f20996f5c7","Type":"ContainerStarted","Data":"d70aaecd1ba52b08cc2699ab3687dcd8ddd8753ef5947d1ac94cf0b299e9b312"} Nov 25 12:45:44 crc kubenswrapper[4688]: I1125 12:45:44.491963 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" event={"ID":"9463536c-fd6c-4aee-b3a9-c7f20996f5c7","Type":"ContainerStarted","Data":"afc742cf3278988c8ebf3bdea98aa74c13ddbd66f146082e21dd61e72864f873"} Nov 25 12:45:44 crc kubenswrapper[4688]: I1125 12:45:44.517813 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" podStartSLOduration=1.820755631 podStartE2EDuration="2.517786939s" podCreationTimestamp="2025-11-25 12:45:42 +0000 UTC" firstStartedPulling="2025-11-25 12:45:43.459303417 +0000 UTC m=+1893.568932285" lastFinishedPulling="2025-11-25 12:45:44.156334725 +0000 UTC m=+1894.265963593" observedRunningTime="2025-11-25 12:45:44.511186342 +0000 UTC m=+1894.620815210" 
watchObservedRunningTime="2025-11-25 12:45:44.517786939 +0000 UTC m=+1894.627415807" Nov 25 12:45:50 crc kubenswrapper[4688]: I1125 12:45:50.759587 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:45:50 crc kubenswrapper[4688]: E1125 12:45:50.760423 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:45:54 crc kubenswrapper[4688]: I1125 12:45:54.584945 4688 generic.go:334] "Generic (PLEG): container finished" podID="9463536c-fd6c-4aee-b3a9-c7f20996f5c7" containerID="afc742cf3278988c8ebf3bdea98aa74c13ddbd66f146082e21dd61e72864f873" exitCode=0 Nov 25 12:45:54 crc kubenswrapper[4688]: I1125 12:45:54.585043 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" event={"ID":"9463536c-fd6c-4aee-b3a9-c7f20996f5c7","Type":"ContainerDied","Data":"afc742cf3278988c8ebf3bdea98aa74c13ddbd66f146082e21dd61e72864f873"} Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.038003 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.124364 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpmdj\" (UniqueName: \"kubernetes.io/projected/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-kube-api-access-fpmdj\") pod \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.124541 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-ssh-key\") pod \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.124600 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-inventory\") pod \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\" (UID: \"9463536c-fd6c-4aee-b3a9-c7f20996f5c7\") " Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.130059 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-kube-api-access-fpmdj" (OuterVolumeSpecName: "kube-api-access-fpmdj") pod "9463536c-fd6c-4aee-b3a9-c7f20996f5c7" (UID: "9463536c-fd6c-4aee-b3a9-c7f20996f5c7"). InnerVolumeSpecName "kube-api-access-fpmdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.155227 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9463536c-fd6c-4aee-b3a9-c7f20996f5c7" (UID: "9463536c-fd6c-4aee-b3a9-c7f20996f5c7"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.159219 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-inventory" (OuterVolumeSpecName: "inventory") pod "9463536c-fd6c-4aee-b3a9-c7f20996f5c7" (UID: "9463536c-fd6c-4aee-b3a9-c7f20996f5c7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.227471 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpmdj\" (UniqueName: \"kubernetes.io/projected/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-kube-api-access-fpmdj\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.227555 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.227576 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9463536c-fd6c-4aee-b3a9-c7f20996f5c7-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.611165 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" event={"ID":"9463536c-fd6c-4aee-b3a9-c7f20996f5c7","Type":"ContainerDied","Data":"d70aaecd1ba52b08cc2699ab3687dcd8ddd8753ef5947d1ac94cf0b299e9b312"} Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.611199 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.611207 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d70aaecd1ba52b08cc2699ab3687dcd8ddd8753ef5947d1ac94cf0b299e9b312" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.704323 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7"] Nov 25 12:45:56 crc kubenswrapper[4688]: E1125 12:45:56.704847 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9463536c-fd6c-4aee-b3a9-c7f20996f5c7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.704876 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9463536c-fd6c-4aee-b3a9-c7f20996f5c7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.705139 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9463536c-fd6c-4aee-b3a9-c7f20996f5c7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.705979 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.708588 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.708744 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.709345 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.709432 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.709646 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.709676 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.709763 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.709961 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.734464 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7"] Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739051 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739088 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739149 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-nova-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739166 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739185 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739215 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739237 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739253 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgctp\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-kube-api-access-rgctp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739288 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739354 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739428 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739457 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.739492 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.840869 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.841880 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.841965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.842055 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.842181 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.842265 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.842352 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.842459 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.842568 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.843254 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgctp\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-kube-api-access-rgctp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.843545 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.843680 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.844394 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.844438 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.848280 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.848449 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.849916 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.848541 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.848770 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.849556 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.849716 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.848457 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.854919 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.855566 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.856168 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.857479 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.859067 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 
12:45:56 crc kubenswrapper[4688]: I1125 12:45:56.868383 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgctp\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-kube-api-access-rgctp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:57 crc kubenswrapper[4688]: I1125 12:45:57.037032 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:45:57 crc kubenswrapper[4688]: I1125 12:45:57.575486 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7"] Nov 25 12:45:57 crc kubenswrapper[4688]: I1125 12:45:57.626071 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" event={"ID":"f5155f9f-2994-43db-9adc-665613ab1711","Type":"ContainerStarted","Data":"43319b47f8b9f23407f9b16b2523772e75f15ad445d52dcd01c79af4c002e228"} Nov 25 12:45:58 crc kubenswrapper[4688]: I1125 12:45:58.635818 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" event={"ID":"f5155f9f-2994-43db-9adc-665613ab1711","Type":"ContainerStarted","Data":"64fe740465df5949899d93d0731456693541645d7ba06450231c3930a547ea49"} Nov 25 12:45:58 crc kubenswrapper[4688]: I1125 12:45:58.661720 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" podStartSLOduration=2.199509762 podStartE2EDuration="2.661698933s" podCreationTimestamp="2025-11-25 12:45:56 +0000 UTC" firstStartedPulling="2025-11-25 12:45:57.581216572 +0000 UTC m=+1907.690845440" lastFinishedPulling="2025-11-25 12:45:58.043405743 +0000 UTC m=+1908.153034611" observedRunningTime="2025-11-25 12:45:58.652751213 +0000 UTC m=+1908.762380081" watchObservedRunningTime="2025-11-25 12:45:58.661698933 +0000 UTC m=+1908.771327801" Nov 25 12:46:02 crc kubenswrapper[4688]: I1125 12:46:02.744704 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:46:02 crc kubenswrapper[4688]: E1125 12:46:02.745606 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:46:14 crc kubenswrapper[4688]: I1125 12:46:14.740978 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:46:14 crc kubenswrapper[4688]: E1125 12:46:14.742140 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:46:25 crc 
kubenswrapper[4688]: I1125 12:46:25.739776 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:46:25 crc kubenswrapper[4688]: E1125 12:46:25.741667 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:46:37 crc kubenswrapper[4688]: I1125 12:46:37.017352 4688 generic.go:334] "Generic (PLEG): container finished" podID="f5155f9f-2994-43db-9adc-665613ab1711" containerID="64fe740465df5949899d93d0731456693541645d7ba06450231c3930a547ea49" exitCode=0 Nov 25 12:46:37 crc kubenswrapper[4688]: I1125 12:46:37.017443 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" event={"ID":"f5155f9f-2994-43db-9adc-665613ab1711","Type":"ContainerDied","Data":"64fe740465df5949899d93d0731456693541645d7ba06450231c3930a547ea49"} Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.459650 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576076 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-libvirt-combined-ca-bundle\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576381 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-bootstrap-combined-ca-bundle\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576415 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-repo-setup-combined-ca-bundle\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576461 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-neutron-metadata-combined-ca-bundle\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576485 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-nova-combined-ca-bundle\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576562 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-inventory\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576584 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ovn-combined-ca-bundle\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576602 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ssh-key\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576667 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576700 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576737 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-telemetry-combined-ca-bundle\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576799 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576888 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.576909 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgctp\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-kube-api-access-rgctp\") pod \"f5155f9f-2994-43db-9adc-665613ab1711\" (UID: \"f5155f9f-2994-43db-9adc-665613ab1711\") " Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.587095 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: 
"openstack-edpm-ipam-ovn-default-certs-0") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589417 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589501 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589538 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589553 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589604 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589672 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589683 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589441 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589735 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-kube-api-access-rgctp" (OuterVolumeSpecName: "kube-api-access-rgctp") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "kube-api-access-rgctp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.589921 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.596889 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.616049 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-inventory" (OuterVolumeSpecName: "inventory") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.622767 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f5155f9f-2994-43db-9adc-665613ab1711" (UID: "f5155f9f-2994-43db-9adc-665613ab1711"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679069 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679100 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgctp\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-kube-api-access-rgctp\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679110 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679119 4688 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679129 4688 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679138 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679151 4688 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679160 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679168 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679176 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679187 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679196 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-ovn-default-certs-0\") on 
node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679205 4688 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5155f9f-2994-43db-9adc-665613ab1711-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:38 crc kubenswrapper[4688]: I1125 12:46:38.679214 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5155f9f-2994-43db-9adc-665613ab1711-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.038867 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" event={"ID":"f5155f9f-2994-43db-9adc-665613ab1711","Type":"ContainerDied","Data":"43319b47f8b9f23407f9b16b2523772e75f15ad445d52dcd01c79af4c002e228"} Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.038912 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.038940 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43319b47f8b9f23407f9b16b2523772e75f15ad445d52dcd01c79af4c002e228" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.199781 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q"] Nov 25 12:46:39 crc kubenswrapper[4688]: E1125 12:46:39.200404 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5155f9f-2994-43db-9adc-665613ab1711" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.200421 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5155f9f-2994-43db-9adc-665613ab1711" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.200663 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5155f9f-2994-43db-9adc-665613ab1711" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.201325 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.203685 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.203990 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.204104 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.205116 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.205853 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.223307 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q"] Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.294449 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.294628 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.294712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrf7\" (UniqueName: \"kubernetes.io/projected/ec1f886e-aff6-4077-a770-5bf03fe54bc9-kube-api-access-pmrf7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.295013 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.295155 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.397625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.397730 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.397816 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmrf7\" (UniqueName: \"kubernetes.io/projected/ec1f886e-aff6-4077-a770-5bf03fe54bc9-kube-api-access-pmrf7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.397938 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.398014 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.399418 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.401954 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.402222 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.404796 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") 
" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.429878 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmrf7\" (UniqueName: \"kubernetes.io/projected/ec1f886e-aff6-4077-a770-5bf03fe54bc9-kube-api-access-pmrf7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dgk5q\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:39 crc kubenswrapper[4688]: I1125 12:46:39.524206 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:46:40 crc kubenswrapper[4688]: I1125 12:46:40.082413 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q"] Nov 25 12:46:40 crc kubenswrapper[4688]: I1125 12:46:40.755177 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:46:40 crc kubenswrapper[4688]: E1125 12:46:40.757320 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:46:41 crc kubenswrapper[4688]: I1125 12:46:41.061455 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" event={"ID":"ec1f886e-aff6-4077-a770-5bf03fe54bc9","Type":"ContainerStarted","Data":"dfb662fc52fde168088c7a891c1e0061f058ba2733d664cd30f9d5205f076e3b"} Nov 25 12:46:41 crc kubenswrapper[4688]: I1125 12:46:41.061498 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" event={"ID":"ec1f886e-aff6-4077-a770-5bf03fe54bc9","Type":"ContainerStarted","Data":"f501dcb8efe0126767c678ac6de9fe7b7665a9cca56be5a6f2c8380ede106c3c"} Nov 25 12:46:41 crc kubenswrapper[4688]: I1125 12:46:41.107595 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" podStartSLOduration=1.50786524 podStartE2EDuration="2.107565172s" podCreationTimestamp="2025-11-25 12:46:39 +0000 UTC" firstStartedPulling="2025-11-25 12:46:40.082035933 +0000 UTC m=+1950.191664801" lastFinishedPulling="2025-11-25 12:46:40.681735855 +0000 UTC m=+1950.791364733" observedRunningTime="2025-11-25 12:46:41.09328185 +0000 UTC m=+1951.202910728" watchObservedRunningTime="2025-11-25 12:46:41.107565172 +0000 UTC m=+1951.217194080" Nov 25 12:46:52 crc kubenswrapper[4688]: I1125 12:46:52.740442 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:46:53 crc kubenswrapper[4688]: I1125 12:46:53.179949 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"7e3cc73dc5c4be98ecb3f652f0f2f2650cf0ec554f71fedeb4b4d865be1081b7"} Nov 25 12:47:43 crc kubenswrapper[4688]: I1125 12:47:43.684159 4688 generic.go:334] "Generic (PLEG): container finished" podID="ec1f886e-aff6-4077-a770-5bf03fe54bc9" 
containerID="dfb662fc52fde168088c7a891c1e0061f058ba2733d664cd30f9d5205f076e3b" exitCode=0 Nov 25 12:47:43 crc kubenswrapper[4688]: I1125 12:47:43.684823 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" event={"ID":"ec1f886e-aff6-4077-a770-5bf03fe54bc9","Type":"ContainerDied","Data":"dfb662fc52fde168088c7a891c1e0061f058ba2733d664cd30f9d5205f076e3b"} Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.217042 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.277916 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-inventory\") pod \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.278050 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmrf7\" (UniqueName: \"kubernetes.io/projected/ec1f886e-aff6-4077-a770-5bf03fe54bc9-kube-api-access-pmrf7\") pod \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.278105 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovn-combined-ca-bundle\") pod \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.278335 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovncontroller-config-0\") pod \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.278381 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ssh-key\") pod \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\" (UID: \"ec1f886e-aff6-4077-a770-5bf03fe54bc9\") " Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.284305 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1f886e-aff6-4077-a770-5bf03fe54bc9-kube-api-access-pmrf7" (OuterVolumeSpecName: "kube-api-access-pmrf7") pod "ec1f886e-aff6-4077-a770-5bf03fe54bc9" (UID: "ec1f886e-aff6-4077-a770-5bf03fe54bc9"). InnerVolumeSpecName "kube-api-access-pmrf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.285034 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ec1f886e-aff6-4077-a770-5bf03fe54bc9" (UID: "ec1f886e-aff6-4077-a770-5bf03fe54bc9"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.312245 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-inventory" (OuterVolumeSpecName: "inventory") pod "ec1f886e-aff6-4077-a770-5bf03fe54bc9" (UID: "ec1f886e-aff6-4077-a770-5bf03fe54bc9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.315763 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ec1f886e-aff6-4077-a770-5bf03fe54bc9" (UID: "ec1f886e-aff6-4077-a770-5bf03fe54bc9"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.315890 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ec1f886e-aff6-4077-a770-5bf03fe54bc9" (UID: "ec1f886e-aff6-4077-a770-5bf03fe54bc9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.380995 4688 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.381041 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.381051 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.381059 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmrf7\" (UniqueName: \"kubernetes.io/projected/ec1f886e-aff6-4077-a770-5bf03fe54bc9-kube-api-access-pmrf7\") on node \"crc\" DevicePath \"\"" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.381068 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1f886e-aff6-4077-a770-5bf03fe54bc9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.707659 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" event={"ID":"ec1f886e-aff6-4077-a770-5bf03fe54bc9","Type":"ContainerDied","Data":"f501dcb8efe0126767c678ac6de9fe7b7665a9cca56be5a6f2c8380ede106c3c"} Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.708004 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f501dcb8efe0126767c678ac6de9fe7b7665a9cca56be5a6f2c8380ede106c3c" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.707721 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dgk5q" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.811373 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6"] Nov 25 12:47:45 crc kubenswrapper[4688]: E1125 12:47:45.811916 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1f886e-aff6-4077-a770-5bf03fe54bc9" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.811939 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1f886e-aff6-4077-a770-5bf03fe54bc9" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.812217 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1f886e-aff6-4077-a770-5bf03fe54bc9" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.813035 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.815022 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.815215 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.815240 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.815377 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.816066 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.816338 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.824382 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6"] Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.890135 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.890426 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.890532 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.890663 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssfjp\" (UniqueName: \"kubernetes.io/projected/7a8f5458-7f30-4fd1-963f-b3619c7f506f-kube-api-access-ssfjp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.890772 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.890868 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.993150 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssfjp\" (UniqueName: \"kubernetes.io/projected/7a8f5458-7f30-4fd1-963f-b3619c7f506f-kube-api-access-ssfjp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.993232 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.993261 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.993349 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.993383 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.993406 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.998650 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.998838 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:45 crc kubenswrapper[4688]: I1125 12:47:45.999806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:46 crc kubenswrapper[4688]: I1125 12:47:46.000014 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:46 crc kubenswrapper[4688]: I1125 12:47:46.000570 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:46 crc kubenswrapper[4688]: I1125 12:47:46.012128 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ssfjp\" (UniqueName: \"kubernetes.io/projected/7a8f5458-7f30-4fd1-963f-b3619c7f506f-kube-api-access-ssfjp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:46 crc kubenswrapper[4688]: I1125 12:47:46.131319 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:47:46 crc kubenswrapper[4688]: I1125 12:47:46.703545 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6"] Nov 25 12:47:46 crc kubenswrapper[4688]: I1125 12:47:46.708972 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 12:47:46 crc kubenswrapper[4688]: I1125 12:47:46.717937 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" event={"ID":"7a8f5458-7f30-4fd1-963f-b3619c7f506f","Type":"ContainerStarted","Data":"c049984d3b83ece5ef5db2fbf471f53195ee8b0fef215b9621888e1d99988c75"} Nov 25 12:47:47 crc kubenswrapper[4688]: I1125 12:47:47.731413 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" event={"ID":"7a8f5458-7f30-4fd1-963f-b3619c7f506f","Type":"ContainerStarted","Data":"8b06bb05dca1640f4869eff5d0d1edf760171bbf41744cc6da9fb9570ee7d831"} Nov 25 12:47:47 crc kubenswrapper[4688]: I1125 12:47:47.749866 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" podStartSLOduration=2.294872377 podStartE2EDuration="2.749847305s" podCreationTimestamp="2025-11-25 12:47:45 +0000 UTC" firstStartedPulling="2025-11-25 12:47:46.708502352 +0000 UTC m=+2016.818131240" lastFinishedPulling="2025-11-25 12:47:47.1634773 +0000 UTC m=+2017.273106168" observedRunningTime="2025-11-25 12:47:47.747727028 +0000 UTC m=+2017.857355916" watchObservedRunningTime="2025-11-25 12:47:47.749847305 +0000 UTC m=+2017.859476173" Nov 25 12:48:35 crc kubenswrapper[4688]: I1125 12:48:35.219879 4688 generic.go:334] "Generic (PLEG): container finished" podID="7a8f5458-7f30-4fd1-963f-b3619c7f506f" containerID="8b06bb05dca1640f4869eff5d0d1edf760171bbf41744cc6da9fb9570ee7d831" exitCode=0 Nov 25 12:48:35 crc kubenswrapper[4688]: I1125 12:48:35.219990 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" event={"ID":"7a8f5458-7f30-4fd1-963f-b3619c7f506f","Type":"ContainerDied","Data":"8b06bb05dca1640f4869eff5d0d1edf760171bbf41744cc6da9fb9570ee7d831"} Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.641079 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.760441 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-ssh-key\") pod \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.760886 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssfjp\" (UniqueName: \"kubernetes.io/projected/7a8f5458-7f30-4fd1-963f-b3619c7f506f-kube-api-access-ssfjp\") pod \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.760983 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-nova-metadata-neutron-config-0\") pod \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.761072 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.761185 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-inventory\") pod \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.761978 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-metadata-combined-ca-bundle\") pod \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\" (UID: \"7a8f5458-7f30-4fd1-963f-b3619c7f506f\") " Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.767567 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7a8f5458-7f30-4fd1-963f-b3619c7f506f" (UID: "7a8f5458-7f30-4fd1-963f-b3619c7f506f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.767859 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8f5458-7f30-4fd1-963f-b3619c7f506f-kube-api-access-ssfjp" (OuterVolumeSpecName: "kube-api-access-ssfjp") pod "7a8f5458-7f30-4fd1-963f-b3619c7f506f" (UID: "7a8f5458-7f30-4fd1-963f-b3619c7f506f"). InnerVolumeSpecName "kube-api-access-ssfjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.793629 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7a8f5458-7f30-4fd1-963f-b3619c7f506f" (UID: "7a8f5458-7f30-4fd1-963f-b3619c7f506f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.795185 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7a8f5458-7f30-4fd1-963f-b3619c7f506f" (UID: "7a8f5458-7f30-4fd1-963f-b3619c7f506f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.796591 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-inventory" (OuterVolumeSpecName: "inventory") pod "7a8f5458-7f30-4fd1-963f-b3619c7f506f" (UID: "7a8f5458-7f30-4fd1-963f-b3619c7f506f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.798303 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7a8f5458-7f30-4fd1-963f-b3619c7f506f" (UID: "7a8f5458-7f30-4fd1-963f-b3619c7f506f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.863979 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.864013 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssfjp\" (UniqueName: \"kubernetes.io/projected/7a8f5458-7f30-4fd1-963f-b3619c7f506f-kube-api-access-ssfjp\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.864027 4688 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.864039 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.864052 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:36 crc kubenswrapper[4688]: I1125 12:48:36.864063 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8f5458-7f30-4fd1-963f-b3619c7f506f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.243898 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" event={"ID":"7a8f5458-7f30-4fd1-963f-b3619c7f506f","Type":"ContainerDied","Data":"c049984d3b83ece5ef5db2fbf471f53195ee8b0fef215b9621888e1d99988c75"} Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.244264 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c049984d3b83ece5ef5db2fbf471f53195ee8b0fef215b9621888e1d99988c75" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.243958 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.405369 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9"] Nov 25 12:48:37 crc kubenswrapper[4688]: E1125 12:48:37.405798 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8f5458-7f30-4fd1-963f-b3619c7f506f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.405816 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8f5458-7f30-4fd1-963f-b3619c7f506f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.406073 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8f5458-7f30-4fd1-963f-b3619c7f506f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.406720 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.408999 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.409395 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.409574 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.409772 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.422044 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9"] Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.422686 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.475005 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.475108 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.475273 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.475308 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzzjm\" (UniqueName: \"kubernetes.io/projected/e884e12f-21a9-42e8-815e-78c0108842d8-kube-api-access-qzzjm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.475385 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.577437 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.577594 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.577636 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.577676 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.577708 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzzjm\" (UniqueName: \"kubernetes.io/projected/e884e12f-21a9-42e8-815e-78c0108842d8-kube-api-access-qzzjm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.581970 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.582144 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.582406 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.585469 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.607637 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzzjm\" (UniqueName: \"kubernetes.io/projected/e884e12f-21a9-42e8-815e-78c0108842d8-kube-api-access-qzzjm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:37 crc kubenswrapper[4688]: I1125 12:48:37.727267 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" Nov 25 12:48:38 crc kubenswrapper[4688]: I1125 12:48:38.316396 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9"] Nov 25 12:48:39 crc kubenswrapper[4688]: I1125 12:48:39.275907 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" event={"ID":"e884e12f-21a9-42e8-815e-78c0108842d8","Type":"ContainerStarted","Data":"fd99475b6101f6422e2c4836ea3a1243e8ecf9659df8230ac6183492edce59ac"} Nov 25 12:48:39 crc kubenswrapper[4688]: I1125 12:48:39.276807 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" event={"ID":"e884e12f-21a9-42e8-815e-78c0108842d8","Type":"ContainerStarted","Data":"77e23b236e575b03e75acd34c81793a7c9a0daa97e54cf8429cf85c1cdf78221"} Nov 25 12:48:39 crc kubenswrapper[4688]: I1125 12:48:39.306436 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" podStartSLOduration=1.861469079 podStartE2EDuration="2.306417088s" podCreationTimestamp="2025-11-25 12:48:37 +0000 UTC" firstStartedPulling="2025-11-25 12:48:38.323846778 +0000 UTC m=+2068.433475646" lastFinishedPulling="2025-11-25 12:48:38.768794787 +0000 UTC m=+2068.878423655" observedRunningTime="2025-11-25 12:48:39.298893017 +0000 UTC m=+2069.408521885" watchObservedRunningTime="2025-11-25 12:48:39.306417088 +0000 UTC m=+2069.416045956" Nov 25 12:49:17 crc kubenswrapper[4688]: I1125 12:49:17.853506 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:49:17 crc kubenswrapper[4688]: I1125 12:49:17.854113 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.234916 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6bmzf"] Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.238932 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.268557 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bmzf"] Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.317406 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2f6\" (UniqueName: \"kubernetes.io/projected/47def538-702a-4f97-8e9c-1cc1f10f323e-kube-api-access-zh2f6\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.317653 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-utilities\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.318028 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-catalog-content\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.419835 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh2f6\" (UniqueName: \"kubernetes.io/projected/47def538-702a-4f97-8e9c-1cc1f10f323e-kube-api-access-zh2f6\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.419962 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-utilities\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.420136 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-catalog-content\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.420735 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-utilities\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.420791 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-catalog-content\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.453221 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zh2f6\" (UniqueName: \"kubernetes.io/projected/47def538-702a-4f97-8e9c-1cc1f10f323e-kube-api-access-zh2f6\") pod \"certified-operators-6bmzf\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:34 crc kubenswrapper[4688]: I1125 12:49:34.570619 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:35 crc kubenswrapper[4688]: I1125 12:49:35.145861 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bmzf"] Nov 25 12:49:35 crc kubenswrapper[4688]: I1125 12:49:35.913718 4688 generic.go:334] "Generic (PLEG): container finished" podID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerID="d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c" exitCode=0 Nov 25 12:49:35 crc kubenswrapper[4688]: I1125 12:49:35.913797 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bmzf" event={"ID":"47def538-702a-4f97-8e9c-1cc1f10f323e","Type":"ContainerDied","Data":"d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c"} Nov 25 12:49:35 crc kubenswrapper[4688]: I1125 12:49:35.914005 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bmzf" event={"ID":"47def538-702a-4f97-8e9c-1cc1f10f323e","Type":"ContainerStarted","Data":"fffdd77db6c4ea9bd5aaa6b8664ecfdc9f5cbbe233475b1f3f86fae31df90e53"} Nov 25 12:49:37 crc kubenswrapper[4688]: I1125 12:49:37.934386 4688 generic.go:334] "Generic (PLEG): container finished" podID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerID="e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229" exitCode=0 Nov 25 12:49:37 crc kubenswrapper[4688]: I1125 12:49:37.934466 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bmzf" event={"ID":"47def538-702a-4f97-8e9c-1cc1f10f323e","Type":"ContainerDied","Data":"e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229"} Nov 25 12:49:38 crc kubenswrapper[4688]: I1125 12:49:38.951422 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bmzf" event={"ID":"47def538-702a-4f97-8e9c-1cc1f10f323e","Type":"ContainerStarted","Data":"a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd"} Nov 25 12:49:38 crc kubenswrapper[4688]: I1125 12:49:38.998935 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6bmzf" podStartSLOduration=2.567933007 podStartE2EDuration="4.99885038s" podCreationTimestamp="2025-11-25 12:49:34 +0000 UTC" firstStartedPulling="2025-11-25 12:49:35.915507806 +0000 UTC m=+2126.025136674" lastFinishedPulling="2025-11-25 12:49:38.346425139 +0000 UTC m=+2128.456054047" observedRunningTime="2025-11-25 12:49:38.974690183 +0000 UTC m=+2129.084319061" watchObservedRunningTime="2025-11-25 12:49:38.99885038 +0000 UTC m=+2129.108479298" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.221610 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2pb5d"] Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.223692 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.232866 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2pb5d"] Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.325585 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-catalog-content\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.325666 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk9f7\" (UniqueName: \"kubernetes.io/projected/bf37c24b-1ef1-4d00-b76f-0970de9393a4-kube-api-access-lk9f7\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.325759 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-utilities\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.426907 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-utilities\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.426987 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-catalog-content\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.427058 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk9f7\" (UniqueName: \"kubernetes.io/projected/bf37c24b-1ef1-4d00-b76f-0970de9393a4-kube-api-access-lk9f7\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.427434 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-utilities\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.427471 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-catalog-content\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.463731 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lk9f7\" (UniqueName: \"kubernetes.io/projected/bf37c24b-1ef1-4d00-b76f-0970de9393a4-kube-api-access-lk9f7\") pod \"community-operators-2pb5d\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") " pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:40 crc kubenswrapper[4688]: I1125 12:49:40.546002 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pb5d" Nov 25 12:49:41 crc kubenswrapper[4688]: I1125 12:49:41.080613 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2pb5d"] Nov 25 12:49:41 crc kubenswrapper[4688]: W1125 12:49:41.083655 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf37c24b_1ef1_4d00_b76f_0970de9393a4.slice/crio-09dcbd94f9684fcb73a33582cc233c00515a2ec0347b58a62045f7c1a617665e WatchSource:0}: Error finding container 09dcbd94f9684fcb73a33582cc233c00515a2ec0347b58a62045f7c1a617665e: Status 404 returned error can't find the container with id 09dcbd94f9684fcb73a33582cc233c00515a2ec0347b58a62045f7c1a617665e Nov 25 12:49:41 crc kubenswrapper[4688]: I1125 12:49:41.983741 4688 generic.go:334] "Generic (PLEG): container finished" podID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerID="f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d" exitCode=0 Nov 25 12:49:41 crc kubenswrapper[4688]: I1125 12:49:41.983863 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pb5d" event={"ID":"bf37c24b-1ef1-4d00-b76f-0970de9393a4","Type":"ContainerDied","Data":"f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d"} Nov 25 12:49:41 crc kubenswrapper[4688]: I1125 12:49:41.984077 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pb5d" event={"ID":"bf37c24b-1ef1-4d00-b76f-0970de9393a4","Type":"ContainerStarted","Data":"09dcbd94f9684fcb73a33582cc233c00515a2ec0347b58a62045f7c1a617665e"} Nov 25 12:49:43 crc kubenswrapper[4688]: I1125 12:49:43.005445 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pb5d" event={"ID":"bf37c24b-1ef1-4d00-b76f-0970de9393a4","Type":"ContainerStarted","Data":"12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd"} Nov 25 12:49:44 crc kubenswrapper[4688]: I1125 12:49:44.017970 4688 generic.go:334] "Generic (PLEG): container finished" podID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerID="12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd" exitCode=0 Nov 25 12:49:44 crc kubenswrapper[4688]: I1125 12:49:44.018389 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pb5d" event={"ID":"bf37c24b-1ef1-4d00-b76f-0970de9393a4","Type":"ContainerDied","Data":"12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd"} Nov 25 12:49:44 crc kubenswrapper[4688]: I1125 12:49:44.571093 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:44 crc kubenswrapper[4688]: I1125 12:49:44.571372 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:44 crc kubenswrapper[4688]: I1125 12:49:44.641946 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:45 crc kubenswrapper[4688]: I1125 12:49:45.030175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pb5d" event={"ID":"bf37c24b-1ef1-4d00-b76f-0970de9393a4","Type":"ContainerStarted","Data":"50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057"} Nov 25 12:49:45 crc kubenswrapper[4688]: I1125 12:49:45.054512 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2pb5d" podStartSLOduration=2.582297301 podStartE2EDuration="5.054490966s" podCreationTimestamp="2025-11-25 12:49:40 +0000 UTC" firstStartedPulling="2025-11-25 12:49:41.986361227 +0000 UTC m=+2132.095990105" lastFinishedPulling="2025-11-25 12:49:44.458554902 +0000 UTC m=+2134.568183770" observedRunningTime="2025-11-25 12:49:45.049683711 +0000 UTC m=+2135.159312579" watchObservedRunningTime="2025-11-25 12:49:45.054490966 +0000 UTC m=+2135.164119834" Nov 25 12:49:45 crc kubenswrapper[4688]: I1125 12:49:45.087321 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.004255 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bmzf"] Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.046803 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6bmzf" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerName="registry-server" containerID="cri-o://a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd" gracePeriod=2 Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.591685 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.661860 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh2f6\" (UniqueName: \"kubernetes.io/projected/47def538-702a-4f97-8e9c-1cc1f10f323e-kube-api-access-zh2f6\") pod \"47def538-702a-4f97-8e9c-1cc1f10f323e\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.661994 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-utilities\") pod \"47def538-702a-4f97-8e9c-1cc1f10f323e\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.662105 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-catalog-content\") pod \"47def538-702a-4f97-8e9c-1cc1f10f323e\" (UID: \"47def538-702a-4f97-8e9c-1cc1f10f323e\") " Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.663105 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-utilities" (OuterVolumeSpecName: "utilities") pod "47def538-702a-4f97-8e9c-1cc1f10f323e" (UID: "47def538-702a-4f97-8e9c-1cc1f10f323e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.663463 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.668272 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47def538-702a-4f97-8e9c-1cc1f10f323e-kube-api-access-zh2f6" (OuterVolumeSpecName: "kube-api-access-zh2f6") pod "47def538-702a-4f97-8e9c-1cc1f10f323e" (UID: "47def538-702a-4f97-8e9c-1cc1f10f323e"). InnerVolumeSpecName "kube-api-access-zh2f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.765467 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh2f6\" (UniqueName: \"kubernetes.io/projected/47def538-702a-4f97-8e9c-1cc1f10f323e-kube-api-access-zh2f6\") on node \"crc\" DevicePath \"\"" Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.854216 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:49:47 crc kubenswrapper[4688]: I1125 12:49:47.854268 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.058337 4688 generic.go:334] "Generic (PLEG): container finished" podID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerID="a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd" exitCode=0 Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.058386 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bmzf" event={"ID":"47def538-702a-4f97-8e9c-1cc1f10f323e","Type":"ContainerDied","Data":"a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd"} Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.058417 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bmzf" event={"ID":"47def538-702a-4f97-8e9c-1cc1f10f323e","Type":"ContainerDied","Data":"fffdd77db6c4ea9bd5aaa6b8664ecfdc9f5cbbe233475b1f3f86fae31df90e53"} Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.058438 4688 scope.go:117] "RemoveContainer" containerID="a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.058458 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bmzf" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.080303 4688 scope.go:117] "RemoveContainer" containerID="e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.110575 4688 scope.go:117] "RemoveContainer" containerID="d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.163992 4688 scope.go:117] "RemoveContainer" containerID="a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd" Nov 25 12:49:48 crc kubenswrapper[4688]: E1125 12:49:48.166480 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd\": container with ID starting with a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd not found: ID does not exist" containerID="a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.166533 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd"} err="failed to get container status \"a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd\": rpc error: code = NotFound desc = could not find container \"a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd\": container with ID starting with a35698f24b2c9a1011e38676296da63f77178c6b3cafc6c6598141be673f0abd not found: ID does not exist" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.166558 4688 scope.go:117] "RemoveContainer" containerID="e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229" Nov 25 12:49:48 crc kubenswrapper[4688]: E1125 12:49:48.168797 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229\": container with ID starting with e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229 not found: ID does not exist" containerID="e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.168842 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229"} err="failed to get container status \"e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229\": rpc error: code = NotFound desc = could not find container \"e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229\": container with ID starting with e475220e991309919c93c40dfff43b76d7a48344cf2449a2d0169fbdbb60a229 not found: ID does not exist" Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.168872 4688 scope.go:117] "RemoveContainer" containerID="d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c" Nov 25 12:49:48 crc kubenswrapper[4688]: E1125 12:49:48.169209 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c\": container with ID starting with d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c not found: ID does not exist" containerID="d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c" 
Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.169247 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c"} err="failed to get container status \"d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c\": rpc error: code = NotFound desc = could not find container \"d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c\": container with ID starting with d88b164addfc8adcda68ca1d1a77a0d08e489fc9d3443c78accba0338db3f09c not found: ID does not exist"
Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.408014 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47def538-702a-4f97-8e9c-1cc1f10f323e" (UID: "47def538-702a-4f97-8e9c-1cc1f10f323e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.477565 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47def538-702a-4f97-8e9c-1cc1f10f323e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.696790 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bmzf"]
Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.706145 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6bmzf"]
Nov 25 12:49:48 crc kubenswrapper[4688]: I1125 12:49:48.760028 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" path="/var/lib/kubelet/pods/47def538-702a-4f97-8e9c-1cc1f10f323e/volumes"
Nov 25 12:49:50 crc kubenswrapper[4688]: I1125 12:49:50.547480 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2pb5d"
Nov 25 12:49:50 crc kubenswrapper[4688]: I1125 12:49:50.547587 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2pb5d"
Nov 25 12:49:50 crc kubenswrapper[4688]: I1125 12:49:50.622572 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2pb5d"
Nov 25 12:49:51 crc kubenswrapper[4688]: I1125 12:49:51.139471 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2pb5d"
Nov 25 12:49:52 crc kubenswrapper[4688]: I1125 12:49:52.004290 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2pb5d"]
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.102427 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2pb5d" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerName="registry-server" containerID="cri-o://50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057" gracePeriod=2
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.646047 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pb5d"
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.669025 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-catalog-content\") pod \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") "
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.669091 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-utilities\") pod \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") "
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.670845 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-utilities" (OuterVolumeSpecName: "utilities") pod "bf37c24b-1ef1-4d00-b76f-0970de9393a4" (UID: "bf37c24b-1ef1-4d00-b76f-0970de9393a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.725478 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf37c24b-1ef1-4d00-b76f-0970de9393a4" (UID: "bf37c24b-1ef1-4d00-b76f-0970de9393a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.770759 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk9f7\" (UniqueName: \"kubernetes.io/projected/bf37c24b-1ef1-4d00-b76f-0970de9393a4-kube-api-access-lk9f7\") pod \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\" (UID: \"bf37c24b-1ef1-4d00-b76f-0970de9393a4\") "
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.772113 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.772148 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf37c24b-1ef1-4d00-b76f-0970de9393a4-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.776033 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf37c24b-1ef1-4d00-b76f-0970de9393a4-kube-api-access-lk9f7" (OuterVolumeSpecName: "kube-api-access-lk9f7") pod "bf37c24b-1ef1-4d00-b76f-0970de9393a4" (UID: "bf37c24b-1ef1-4d00-b76f-0970de9393a4"). InnerVolumeSpecName "kube-api-access-lk9f7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:49:53 crc kubenswrapper[4688]: I1125 12:49:53.873822 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk9f7\" (UniqueName: \"kubernetes.io/projected/bf37c24b-1ef1-4d00-b76f-0970de9393a4-kube-api-access-lk9f7\") on node \"crc\" DevicePath \"\""
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.129143 4688 generic.go:334] "Generic (PLEG): container finished" podID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerID="50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057" exitCode=0
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.129185 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pb5d" event={"ID":"bf37c24b-1ef1-4d00-b76f-0970de9393a4","Type":"ContainerDied","Data":"50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057"}
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.129193 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pb5d"
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.129209 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pb5d" event={"ID":"bf37c24b-1ef1-4d00-b76f-0970de9393a4","Type":"ContainerDied","Data":"09dcbd94f9684fcb73a33582cc233c00515a2ec0347b58a62045f7c1a617665e"}
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.129225 4688 scope.go:117] "RemoveContainer" containerID="50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057"
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.160515 4688 scope.go:117] "RemoveContainer" containerID="12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd"
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.165134 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2pb5d"]
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.178008 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2pb5d"]
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.196553 4688 scope.go:117] "RemoveContainer" containerID="f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d"
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.224411 4688 scope.go:117] "RemoveContainer" containerID="50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057"
Nov 25 12:49:54 crc kubenswrapper[4688]: E1125 12:49:54.224862 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057\": container with ID starting with 50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057 not found: ID does not exist" containerID="50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057"
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.224899 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057"} err="failed to get container status \"50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057\": rpc error: code = NotFound desc = could not find container \"50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057\": container with ID starting with 50a35197df8f70d80282b47243e5b3f8a32c669eadfd177265d5d16e2b7a7057 not found: ID does not exist"
Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.224925 4688 scope.go:117] "RemoveContainer" containerID="12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd" Nov 25 12:49:54 crc kubenswrapper[4688]: E1125 12:49:54.225196 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd\": container with ID starting with 12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd not found: ID does not exist" containerID="12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd" Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.225236 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd"} err="failed to get container status \"12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd\": rpc error: code = NotFound desc = could not find container \"12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd\": container with ID starting with 12da3918a2ab42ee7fb9b2314011ef5ef6c63a5479a5caa2952459ec7e38dbdd not found: ID does not exist" Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.225266 4688 scope.go:117] "RemoveContainer" containerID="f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d" Nov 25 12:49:54 crc kubenswrapper[4688]: E1125 12:49:54.225710 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d\": container with ID starting with f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d not found: ID does not exist" containerID="f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d" Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.225739 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d"} err="failed to get container status \"f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d\": rpc error: code = NotFound desc = could not find container \"f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d\": container with ID starting with f2c968442d4f831dceb1ab38f067ac4b76ab854e47ee726a2793bf29a579db8d not found: ID does not exist" Nov 25 12:49:54 crc kubenswrapper[4688]: I1125 12:49:54.751495 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" path="/var/lib/kubelet/pods/bf37c24b-1ef1-4d00-b76f-0970de9393a4/volumes" Nov 25 12:50:17 crc kubenswrapper[4688]: I1125 12:50:17.854006 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 12:50:17 crc kubenswrapper[4688]: I1125 12:50:17.854573 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 12:50:17 crc kubenswrapper[4688]: I1125 12:50:17.854623 4688 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 12:50:17 crc kubenswrapper[4688]: I1125 12:50:17.855353 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e3cc73dc5c4be98ecb3f652f0f2f2650cf0ec554f71fedeb4b4d865be1081b7"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 12:50:17 crc kubenswrapper[4688]: I1125 12:50:17.855406 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://7e3cc73dc5c4be98ecb3f652f0f2f2650cf0ec554f71fedeb4b4d865be1081b7" gracePeriod=600 Nov 25 12:50:18 crc kubenswrapper[4688]: I1125 12:50:18.357954 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="7e3cc73dc5c4be98ecb3f652f0f2f2650cf0ec554f71fedeb4b4d865be1081b7" exitCode=0 Nov 25 12:50:18 crc kubenswrapper[4688]: I1125 12:50:18.358034 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"7e3cc73dc5c4be98ecb3f652f0f2f2650cf0ec554f71fedeb4b4d865be1081b7"} Nov 25 12:50:18 crc kubenswrapper[4688]: I1125 12:50:18.358399 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"} Nov 25 12:50:18 crc kubenswrapper[4688]: I1125 12:50:18.358419 4688 scope.go:117] "RemoveContainer" containerID="3274b9d370aa8968be24ad414eb668bfb9187a8124f5a3e1016fee521ad661fc" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.733239 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h975x"] Nov 25 12:51:00 crc kubenswrapper[4688]: E1125 12:51:00.741341 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerName="extract-utilities" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.741398 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerName="extract-utilities" Nov 25 12:51:00 crc kubenswrapper[4688]: E1125 12:51:00.741471 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerName="registry-server" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.741484 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerName="registry-server" Nov 25 12:51:00 crc kubenswrapper[4688]: E1125 12:51:00.741602 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerName="registry-server" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.741620 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerName="registry-server" Nov 25 12:51:00 crc kubenswrapper[4688]: E1125 12:51:00.741646 4688 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerName="extract-content" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.741658 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerName="extract-content" Nov 25 12:51:00 crc kubenswrapper[4688]: E1125 12:51:00.741692 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerName="extract-content" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.741705 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerName="extract-content" Nov 25 12:51:00 crc kubenswrapper[4688]: E1125 12:51:00.741742 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerName="extract-utilities" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.741764 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerName="extract-utilities" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.742556 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf37c24b-1ef1-4d00-b76f-0970de9393a4" containerName="registry-server" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.742648 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="47def538-702a-4f97-8e9c-1cc1f10f323e" containerName="registry-server" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.755169 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.794397 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h975x"] Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.880078 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n578x\" (UniqueName: \"kubernetes.io/projected/0890b099-15d0-4cd3-a05a-e1b542a42a22-kube-api-access-n578x\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.880184 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-utilities\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.880228 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-catalog-content\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.982494 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-utilities\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.982637 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-catalog-content\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.982833 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n578x\" (UniqueName: \"kubernetes.io/projected/0890b099-15d0-4cd3-a05a-e1b542a42a22-kube-api-access-n578x\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.983233 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-utilities\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:00 crc kubenswrapper[4688]: I1125 12:51:00.983434 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-catalog-content\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:01 crc kubenswrapper[4688]: I1125 12:51:01.007486 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n578x\" (UniqueName: \"kubernetes.io/projected/0890b099-15d0-4cd3-a05a-e1b542a42a22-kube-api-access-n578x\") pod \"redhat-operators-h975x\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:01 crc kubenswrapper[4688]: I1125 12:51:01.105125 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:01 crc kubenswrapper[4688]: I1125 12:51:01.573968 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h975x"] Nov 25 12:51:01 crc kubenswrapper[4688]: I1125 12:51:01.804755 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h975x" event={"ID":"0890b099-15d0-4cd3-a05a-e1b542a42a22","Type":"ContainerStarted","Data":"fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27"} Nov 25 12:51:01 crc kubenswrapper[4688]: I1125 12:51:01.805157 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h975x" event={"ID":"0890b099-15d0-4cd3-a05a-e1b542a42a22","Type":"ContainerStarted","Data":"9a24856831391a0cf00bd09c221d396f0e97c6b17e3ac5a38105a3ffa5ab9427"} Nov 25 12:51:02 crc kubenswrapper[4688]: I1125 12:51:02.822480 4688 generic.go:334] "Generic (PLEG): container finished" podID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerID="fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27" exitCode=0 Nov 25 12:51:02 crc kubenswrapper[4688]: I1125 12:51:02.822560 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h975x" event={"ID":"0890b099-15d0-4cd3-a05a-e1b542a42a22","Type":"ContainerDied","Data":"fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27"} Nov 25 12:51:04 crc kubenswrapper[4688]: I1125 12:51:04.866592 4688 generic.go:334] "Generic (PLEG): container finished" podID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerID="43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2" exitCode=0 Nov 25 12:51:04 crc kubenswrapper[4688]: I1125 12:51:04.866761 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h975x" event={"ID":"0890b099-15d0-4cd3-a05a-e1b542a42a22","Type":"ContainerDied","Data":"43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2"} Nov 25 12:51:05 crc kubenswrapper[4688]: I1125 12:51:05.879700 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h975x" event={"ID":"0890b099-15d0-4cd3-a05a-e1b542a42a22","Type":"ContainerStarted","Data":"9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83"} Nov 25 12:51:05 crc kubenswrapper[4688]: I1125 12:51:05.899925 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h975x" podStartSLOduration=3.266918405 podStartE2EDuration="5.899903126s" podCreationTimestamp="2025-11-25 12:51:00 +0000 UTC" firstStartedPulling="2025-11-25 12:51:02.824178797 +0000 UTC m=+2212.933807665" lastFinishedPulling="2025-11-25 12:51:05.457163518 +0000 UTC m=+2215.566792386" observedRunningTime="2025-11-25 12:51:05.897306626 +0000 UTC m=+2216.006935494" watchObservedRunningTime="2025-11-25 12:51:05.899903126 +0000 UTC m=+2216.009531994" Nov 25 12:51:11 crc kubenswrapper[4688]: I1125 12:51:11.105466 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:11 crc kubenswrapper[4688]: I1125 12:51:11.106776 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:11 crc kubenswrapper[4688]: I1125 12:51:11.168806 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h975x" Nov 
25 12:51:12 crc kubenswrapper[4688]: I1125 12:51:12.009426 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:12 crc kubenswrapper[4688]: I1125 12:51:12.071431 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h975x"] Nov 25 12:51:13 crc kubenswrapper[4688]: I1125 12:51:13.962940 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h975x" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerName="registry-server" containerID="cri-o://9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83" gracePeriod=2 Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.585602 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.764131 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n578x\" (UniqueName: \"kubernetes.io/projected/0890b099-15d0-4cd3-a05a-e1b542a42a22-kube-api-access-n578x\") pod \"0890b099-15d0-4cd3-a05a-e1b542a42a22\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.764204 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-catalog-content\") pod \"0890b099-15d0-4cd3-a05a-e1b542a42a22\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.764227 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-utilities\") pod \"0890b099-15d0-4cd3-a05a-e1b542a42a22\" (UID: \"0890b099-15d0-4cd3-a05a-e1b542a42a22\") " Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.765381 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-utilities" (OuterVolumeSpecName: "utilities") pod "0890b099-15d0-4cd3-a05a-e1b542a42a22" (UID: "0890b099-15d0-4cd3-a05a-e1b542a42a22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.770559 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0890b099-15d0-4cd3-a05a-e1b542a42a22-kube-api-access-n578x" (OuterVolumeSpecName: "kube-api-access-n578x") pod "0890b099-15d0-4cd3-a05a-e1b542a42a22" (UID: "0890b099-15d0-4cd3-a05a-e1b542a42a22"). InnerVolumeSpecName "kube-api-access-n578x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.866751 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n578x\" (UniqueName: \"kubernetes.io/projected/0890b099-15d0-4cd3-a05a-e1b542a42a22-kube-api-access-n578x\") on node \"crc\" DevicePath \"\"" Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.866791 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.973868 4688 generic.go:334] "Generic (PLEG): container finished" podID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerID="9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83" exitCode=0 Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.973903 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h975x" Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.973922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h975x" event={"ID":"0890b099-15d0-4cd3-a05a-e1b542a42a22","Type":"ContainerDied","Data":"9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83"} Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.975392 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h975x" event={"ID":"0890b099-15d0-4cd3-a05a-e1b542a42a22","Type":"ContainerDied","Data":"9a24856831391a0cf00bd09c221d396f0e97c6b17e3ac5a38105a3ffa5ab9427"} Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.975427 4688 scope.go:117] "RemoveContainer" containerID="9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83" Nov 25 12:51:14 crc kubenswrapper[4688]: I1125 12:51:14.997053 4688 scope.go:117] "RemoveContainer" containerID="43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.019863 4688 scope.go:117] "RemoveContainer" containerID="fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.066200 4688 scope.go:117] "RemoveContainer" containerID="9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83" Nov 25 12:51:15 crc kubenswrapper[4688]: E1125 12:51:15.066688 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83\": container with ID starting with 9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83 not found: ID does not exist" containerID="9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.066722 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83"} err="failed to get container status \"9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83\": rpc error: code = NotFound desc = could not find container \"9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83\": container with ID starting with 9d07e1e30cac86d8d063940839d7f4347078a553c59c7857fd84eb0663311a83 not found: ID does not exist" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.066747 4688 scope.go:117] 
"RemoveContainer" containerID="43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2" Nov 25 12:51:15 crc kubenswrapper[4688]: E1125 12:51:15.067212 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2\": container with ID starting with 43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2 not found: ID does not exist" containerID="43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.067376 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2"} err="failed to get container status \"43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2\": rpc error: code = NotFound desc = could not find container \"43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2\": container with ID starting with 43b9efbe9c0052a785f68f5016daa5aa251cd11e50fe1473ec4fb70d0cb59fa2 not found: ID does not exist" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.067515 4688 scope.go:117] "RemoveContainer" containerID="fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27" Nov 25 12:51:15 crc kubenswrapper[4688]: E1125 12:51:15.068074 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27\": container with ID starting with fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27 not found: ID does not exist" containerID="fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.068127 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27"} err="failed to get container status \"fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27\": rpc error: code = NotFound desc = could not find container \"fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27\": container with ID starting with fddba74cfb5beb9c43c72f553f55a3ef3a7e2ecfba5236398c03c9d5028dbf27 not found: ID does not exist" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.680491 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0890b099-15d0-4cd3-a05a-e1b542a42a22" (UID: "0890b099-15d0-4cd3-a05a-e1b542a42a22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.683012 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0890b099-15d0-4cd3-a05a-e1b542a42a22-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.908293 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h975x"] Nov 25 12:51:15 crc kubenswrapper[4688]: I1125 12:51:15.941833 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h975x"] Nov 25 12:51:16 crc kubenswrapper[4688]: I1125 12:51:16.750069 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" path="/var/lib/kubelet/pods/0890b099-15d0-4cd3-a05a-e1b542a42a22/volumes" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.790954 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-288kq"] Nov 25 12:52:04 crc kubenswrapper[4688]: E1125 12:52:04.792040 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerName="registry-server" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.792060 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerName="registry-server" Nov 25 12:52:04 crc kubenswrapper[4688]: E1125 12:52:04.792083 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerName="extract-content" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.792091 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerName="extract-content" Nov 25 12:52:04 crc kubenswrapper[4688]: E1125 12:52:04.792101 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerName="extract-utilities" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.792110 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerName="extract-utilities" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.792382 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0890b099-15d0-4cd3-a05a-e1b542a42a22" containerName="registry-server" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.794143 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.810225 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-288kq"] Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.889113 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ckk\" (UniqueName: \"kubernetes.io/projected/82d1c42a-ac23-4db4-8394-659985d5f487-kube-api-access-95ckk\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.889277 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-catalog-content\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.889337 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-utilities\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.991384 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-catalog-content\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.991495 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-utilities\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.991654 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ckk\" (UniqueName: \"kubernetes.io/projected/82d1c42a-ac23-4db4-8394-659985d5f487-kube-api-access-95ckk\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.991942 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-catalog-content\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:04 crc kubenswrapper[4688]: I1125 12:52:04.992223 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-utilities\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:05 crc kubenswrapper[4688]: I1125 12:52:05.012790 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-95ckk\" (UniqueName: \"kubernetes.io/projected/82d1c42a-ac23-4db4-8394-659985d5f487-kube-api-access-95ckk\") pod \"redhat-marketplace-288kq\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:05 crc kubenswrapper[4688]: I1125 12:52:05.138135 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:05 crc kubenswrapper[4688]: I1125 12:52:05.616776 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-288kq"] Nov 25 12:52:06 crc kubenswrapper[4688]: I1125 12:52:06.490462 4688 generic.go:334] "Generic (PLEG): container finished" podID="82d1c42a-ac23-4db4-8394-659985d5f487" containerID="e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e" exitCode=0 Nov 25 12:52:06 crc kubenswrapper[4688]: I1125 12:52:06.490518 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288kq" event={"ID":"82d1c42a-ac23-4db4-8394-659985d5f487","Type":"ContainerDied","Data":"e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e"} Nov 25 12:52:06 crc kubenswrapper[4688]: I1125 12:52:06.490573 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288kq" event={"ID":"82d1c42a-ac23-4db4-8394-659985d5f487","Type":"ContainerStarted","Data":"34171c052e6f601454464c62dfdb342f9826106d95aed80a17c14a42b3b4cd49"} Nov 25 12:52:07 crc kubenswrapper[4688]: I1125 12:52:07.501023 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288kq" event={"ID":"82d1c42a-ac23-4db4-8394-659985d5f487","Type":"ContainerStarted","Data":"8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234"} Nov 25 12:52:08 crc kubenswrapper[4688]: I1125 12:52:08.512497 4688 generic.go:334] "Generic (PLEG): container finished" podID="82d1c42a-ac23-4db4-8394-659985d5f487" containerID="8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234" exitCode=0 Nov 25 12:52:08 crc kubenswrapper[4688]: I1125 12:52:08.512590 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288kq" event={"ID":"82d1c42a-ac23-4db4-8394-659985d5f487","Type":"ContainerDied","Data":"8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234"} Nov 25 12:52:09 crc kubenswrapper[4688]: I1125 12:52:09.526914 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288kq" event={"ID":"82d1c42a-ac23-4db4-8394-659985d5f487","Type":"ContainerStarted","Data":"3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11"} Nov 25 12:52:09 crc kubenswrapper[4688]: I1125 12:52:09.546259 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-288kq" podStartSLOduration=3.021039485 podStartE2EDuration="5.546223031s" podCreationTimestamp="2025-11-25 12:52:04 +0000 UTC" firstStartedPulling="2025-11-25 12:52:06.493045146 +0000 UTC m=+2276.602674044" lastFinishedPulling="2025-11-25 12:52:09.018228702 +0000 UTC m=+2279.127857590" observedRunningTime="2025-11-25 12:52:09.541506746 +0000 UTC m=+2279.651135634" watchObservedRunningTime="2025-11-25 12:52:09.546223031 +0000 UTC m=+2279.655852119" Nov 25 12:52:15 crc kubenswrapper[4688]: I1125 12:52:15.139052 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:15 crc kubenswrapper[4688]: I1125 12:52:15.139618 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:15 crc kubenswrapper[4688]: I1125 12:52:15.195100 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:15 crc kubenswrapper[4688]: I1125 12:52:15.653247 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:15 crc kubenswrapper[4688]: I1125 12:52:15.719111 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-288kq"] Nov 25 12:52:17 crc kubenswrapper[4688]: I1125 12:52:17.606040 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-288kq" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" containerName="registry-server" containerID="cri-o://3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11" gracePeriod=2 Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.202980 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.369069 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-catalog-content\") pod \"82d1c42a-ac23-4db4-8394-659985d5f487\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.369144 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95ckk\" (UniqueName: \"kubernetes.io/projected/82d1c42a-ac23-4db4-8394-659985d5f487-kube-api-access-95ckk\") pod \"82d1c42a-ac23-4db4-8394-659985d5f487\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.369220 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-utilities\") pod \"82d1c42a-ac23-4db4-8394-659985d5f487\" (UID: \"82d1c42a-ac23-4db4-8394-659985d5f487\") " Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.370505 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-utilities" (OuterVolumeSpecName: "utilities") pod "82d1c42a-ac23-4db4-8394-659985d5f487" (UID: "82d1c42a-ac23-4db4-8394-659985d5f487"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.375782 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d1c42a-ac23-4db4-8394-659985d5f487-kube-api-access-95ckk" (OuterVolumeSpecName: "kube-api-access-95ckk") pod "82d1c42a-ac23-4db4-8394-659985d5f487" (UID: "82d1c42a-ac23-4db4-8394-659985d5f487"). InnerVolumeSpecName "kube-api-access-95ckk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.396093 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82d1c42a-ac23-4db4-8394-659985d5f487" (UID: "82d1c42a-ac23-4db4-8394-659985d5f487"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.471615 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.471675 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95ckk\" (UniqueName: \"kubernetes.io/projected/82d1c42a-ac23-4db4-8394-659985d5f487-kube-api-access-95ckk\") on node \"crc\" DevicePath \"\"" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.471690 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d1c42a-ac23-4db4-8394-659985d5f487-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.617042 4688 generic.go:334] "Generic (PLEG): container finished" podID="82d1c42a-ac23-4db4-8394-659985d5f487" containerID="3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11" exitCode=0 Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.617107 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288kq" event={"ID":"82d1c42a-ac23-4db4-8394-659985d5f487","Type":"ContainerDied","Data":"3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11"} Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.617146 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288kq" event={"ID":"82d1c42a-ac23-4db4-8394-659985d5f487","Type":"ContainerDied","Data":"34171c052e6f601454464c62dfdb342f9826106d95aed80a17c14a42b3b4cd49"} Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.617176 4688 scope.go:117] "RemoveContainer" containerID="3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.617262 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-288kq" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.636771 4688 scope.go:117] "RemoveContainer" containerID="8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.657166 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-288kq"] Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.667084 4688 scope.go:117] "RemoveContainer" containerID="e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.668023 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-288kq"] Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.712257 4688 scope.go:117] "RemoveContainer" containerID="3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11" Nov 25 12:52:18 crc kubenswrapper[4688]: E1125 12:52:18.712765 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11\": container with ID starting with 3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11 not found: ID does not exist" containerID="3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.712798 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11"} err="failed to get container status \"3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11\": rpc error: code = NotFound desc = could not find container \"3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11\": container with ID starting with 3cc3df8961630ec7b1cdee8ccf0a52e2f1fd53168ab08cbdbbff1618fddf2c11 not found: ID does not exist" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.712823 4688 scope.go:117] "RemoveContainer" containerID="8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234" Nov 25 12:52:18 crc kubenswrapper[4688]: E1125 12:52:18.713151 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234\": container with ID starting with 8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234 not found: ID does not exist" containerID="8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.713216 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234"} err="failed to get container status \"8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234\": rpc error: code = NotFound desc = could not find container \"8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234\": container with ID starting with 8b535c3407c28f80c0ea363aae7981d9dd92d2a84344b9e6326cfea3f39f6234 not found: ID does not exist" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.713259 4688 scope.go:117] "RemoveContainer" containerID="e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e" Nov 25 12:52:18 crc kubenswrapper[4688]: E1125 12:52:18.713597 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e\": container with ID starting with e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e not found: ID does not exist" containerID="e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.713627 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e"} err="failed to get container status \"e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e\": rpc error: code = NotFound desc = could not find container \"e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e\": container with ID starting with e5aebbdc51ad319ef03cccbda23527eb527c645973d5b68bbbf39e5ee590118e not found: ID does not exist" Nov 25 12:52:18 crc kubenswrapper[4688]: I1125 12:52:18.751003 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" path="/var/lib/kubelet/pods/82d1c42a-ac23-4db4-8394-659985d5f487/volumes" Nov 25 12:52:39 crc kubenswrapper[4688]: I1125 12:52:39.822579 4688 generic.go:334] "Generic (PLEG): container finished" podID="e884e12f-21a9-42e8-815e-78c0108842d8" containerID="fd99475b6101f6422e2c4836ea3a1243e8ecf9659df8230ac6183492edce59ac" exitCode=0 Nov 25 12:52:39 crc kubenswrapper[4688]: I1125 12:52:39.822673 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" event={"ID":"e884e12f-21a9-42e8-815e-78c0108842d8","Type":"ContainerDied","Data":"fd99475b6101f6422e2c4836ea3a1243e8ecf9659df8230ac6183492edce59ac"} Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.322438 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.433682 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-ssh-key\") pod \"e884e12f-21a9-42e8-815e-78c0108842d8\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") "
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.433746 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzzjm\" (UniqueName: \"kubernetes.io/projected/e884e12f-21a9-42e8-815e-78c0108842d8-kube-api-access-qzzjm\") pod \"e884e12f-21a9-42e8-815e-78c0108842d8\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") "
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.433787 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-secret-0\") pod \"e884e12f-21a9-42e8-815e-78c0108842d8\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") "
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.433851 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-inventory\") pod \"e884e12f-21a9-42e8-815e-78c0108842d8\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") "
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.433885 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-combined-ca-bundle\") pod \"e884e12f-21a9-42e8-815e-78c0108842d8\" (UID: \"e884e12f-21a9-42e8-815e-78c0108842d8\") "
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.440947 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e884e12f-21a9-42e8-815e-78c0108842d8" (UID: "e884e12f-21a9-42e8-815e-78c0108842d8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.444024 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e884e12f-21a9-42e8-815e-78c0108842d8-kube-api-access-qzzjm" (OuterVolumeSpecName: "kube-api-access-qzzjm") pod "e884e12f-21a9-42e8-815e-78c0108842d8" (UID: "e884e12f-21a9-42e8-815e-78c0108842d8"). InnerVolumeSpecName "kube-api-access-qzzjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.468032 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-inventory" (OuterVolumeSpecName: "inventory") pod "e884e12f-21a9-42e8-815e-78c0108842d8" (UID: "e884e12f-21a9-42e8-815e-78c0108842d8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.474013 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e884e12f-21a9-42e8-815e-78c0108842d8" (UID: "e884e12f-21a9-42e8-815e-78c0108842d8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.496863 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e884e12f-21a9-42e8-815e-78c0108842d8" (UID: "e884e12f-21a9-42e8-815e-78c0108842d8"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.537154 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.537209 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.537226 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.537240 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e884e12f-21a9-42e8-815e-78c0108842d8-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.537254 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzzjm\" (UniqueName: \"kubernetes.io/projected/e884e12f-21a9-42e8-815e-78c0108842d8-kube-api-access-qzzjm\") on node \"crc\" DevicePath \"\""
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.846715 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9" event={"ID":"e884e12f-21a9-42e8-815e-78c0108842d8","Type":"ContainerDied","Data":"77e23b236e575b03e75acd34c81793a7c9a0daa97e54cf8429cf85c1cdf78221"}
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.846764 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.846767 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77e23b236e575b03e75acd34c81793a7c9a0daa97e54cf8429cf85c1cdf78221"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.980812 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"]
Nov 25 12:52:41 crc kubenswrapper[4688]: E1125 12:52:41.982177 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" containerName="extract-utilities"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.982616 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" containerName="extract-utilities"
Nov 25 12:52:41 crc kubenswrapper[4688]: E1125 12:52:41.982841 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e884e12f-21a9-42e8-815e-78c0108842d8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.982970 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e884e12f-21a9-42e8-815e-78c0108842d8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 12:52:41 crc kubenswrapper[4688]: E1125 12:52:41.983095 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" containerName="registry-server"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.983253 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" containerName="registry-server"
Nov 25 12:52:41 crc kubenswrapper[4688]: E1125 12:52:41.983425 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" containerName="extract-content"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.983562 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" containerName="extract-content"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.984608 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d1c42a-ac23-4db4-8394-659985d5f487" containerName="registry-server"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.984828 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e884e12f-21a9-42e8-815e-78c0108842d8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.986565 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.990288 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.991865 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.991892 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.992127 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.993435 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.993972 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.994002 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx"
Nov 25 12:52:41 crc kubenswrapper[4688]: I1125 12:52:41.999469 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"]
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.149918 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jx7\" (UniqueName: \"kubernetes.io/projected/79315b1a-e9e2-422c-8be3-97ebdb2038c0-kube-api-access-96jx7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.150227 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.150368 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.150438 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.150560 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.150643 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.150732 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.150788 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.150849 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252263 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252351 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252395 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252422 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252459 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252497 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96jx7\" (UniqueName: \"kubernetes.io/projected/79315b1a-e9e2-422c-8be3-97ebdb2038c0-kube-api-access-96jx7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252614 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252668 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.252710 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.253020 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.261236 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.261271 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.261470 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.261687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.261722 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.261970 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.261973 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.269820 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96jx7\" (UniqueName: \"kubernetes.io/projected/79315b1a-e9e2-422c-8be3-97ebdb2038c0-kube-api-access-96jx7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-z88rx\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.317548 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.814653 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"]
Nov 25 12:52:42 crc kubenswrapper[4688]: I1125 12:52:42.859058 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx" event={"ID":"79315b1a-e9e2-422c-8be3-97ebdb2038c0","Type":"ContainerStarted","Data":"71b198de417b568b6ddb70825367090f6db310e36cc80582684f01058dfd286c"}
Nov 25 12:52:43 crc kubenswrapper[4688]: I1125 12:52:43.870018 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx" event={"ID":"79315b1a-e9e2-422c-8be3-97ebdb2038c0","Type":"ContainerStarted","Data":"72f95472ca30190bcaa3b8077cee3172acf71e4072eed2df9159f907c9e7acf6"}
Nov 25 12:52:43 crc kubenswrapper[4688]: I1125 12:52:43.900980 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx" podStartSLOduration=2.482167874 podStartE2EDuration="2.900958811s" podCreationTimestamp="2025-11-25 12:52:41 +0000 UTC" firstStartedPulling="2025-11-25 12:52:42.821141254 +0000 UTC m=+2312.930770122" lastFinishedPulling="2025-11-25 12:52:43.239932191 +0000 UTC m=+2313.349561059" observedRunningTime="2025-11-25 12:52:43.889418252 +0000 UTC m=+2313.999047150" watchObservedRunningTime="2025-11-25 12:52:43.900958811 +0000 UTC m=+2314.010587679"
Nov 25 12:52:47 crc kubenswrapper[4688]: I1125 12:52:47.853480 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:52:47 crc kubenswrapper[4688]: I1125 12:52:47.853845 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:53:17 crc kubenswrapper[4688]: I1125 12:53:17.854257 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:53:17 crc kubenswrapper[4688]: I1125 12:53:17.856913 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:53:47 crc kubenswrapper[4688]: I1125 12:53:47.854327 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 12:53:47 crc kubenswrapper[4688]: I1125 12:53:47.854822 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 12:53:47 crc kubenswrapper[4688]: I1125 12:53:47.854867 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6"
Nov 25 12:53:47 crc kubenswrapper[4688]: I1125 12:53:47.855566 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 12:53:47 crc kubenswrapper[4688]: I1125 12:53:47.855631 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6" gracePeriod=600
Nov 25 12:53:47 crc kubenswrapper[4688]: E1125 12:53:47.981707 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:53:48 crc kubenswrapper[4688]: I1125 12:53:48.524490 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6" exitCode=0
Nov 25 12:53:48 crc kubenswrapper[4688]: I1125 12:53:48.524578 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"}
Nov 25 12:53:48 crc kubenswrapper[4688]: I1125 12:53:48.524909 4688 scope.go:117] "RemoveContainer" containerID="7e3cc73dc5c4be98ecb3f652f0f2f2650cf0ec554f71fedeb4b4d865be1081b7"
Nov 25 12:53:48 crc kubenswrapper[4688]: I1125 12:53:48.526001 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:53:48 crc kubenswrapper[4688]: E1125 12:53:48.526590 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:54:01 crc kubenswrapper[4688]: I1125 12:54:01.740141 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:54:01 crc kubenswrapper[4688]: E1125 12:54:01.741007 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:54:14 crc kubenswrapper[4688]: I1125 12:54:14.739826 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:54:14 crc kubenswrapper[4688]: E1125 12:54:14.740676 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:54:25 crc kubenswrapper[4688]: I1125 12:54:25.739629 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:54:25 crc kubenswrapper[4688]: E1125 12:54:25.740443 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:54:37 crc kubenswrapper[4688]: I1125 12:54:37.739812 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:54:37 crc kubenswrapper[4688]: E1125 12:54:37.740575 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:54:48 crc kubenswrapper[4688]: I1125 12:54:48.740808 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:54:48 crc kubenswrapper[4688]: E1125 12:54:48.741590 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:55:01 crc kubenswrapper[4688]: I1125 12:55:01.740014 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:55:01 crc kubenswrapper[4688]: E1125 12:55:01.740992 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:55:15 crc kubenswrapper[4688]: I1125 12:55:15.740579 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:55:15 crc kubenswrapper[4688]: E1125 12:55:15.741352 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:55:29 crc kubenswrapper[4688]: I1125 12:55:29.741084 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:55:29 crc kubenswrapper[4688]: E1125 12:55:29.742013 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:55:30 crc kubenswrapper[4688]: I1125 12:55:30.991801 4688 generic.go:334] "Generic (PLEG): container finished" podID="79315b1a-e9e2-422c-8be3-97ebdb2038c0" containerID="72f95472ca30190bcaa3b8077cee3172acf71e4072eed2df9159f907c9e7acf6" exitCode=0
Nov 25 12:55:30 crc kubenswrapper[4688]: I1125 12:55:30.991888 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx" event={"ID":"79315b1a-e9e2-422c-8be3-97ebdb2038c0","Type":"ContainerDied","Data":"72f95472ca30190bcaa3b8077cee3172acf71e4072eed2df9159f907c9e7acf6"}
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.499357 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-1\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625285 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-0\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625307 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-ssh-key\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625337 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-0\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625351 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-inventory\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625371 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-combined-ca-bundle\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625407 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-extra-config-0\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625637 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-1\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.625715 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96jx7\" (UniqueName: \"kubernetes.io/projected/79315b1a-e9e2-422c-8be3-97ebdb2038c0-kube-api-access-96jx7\") pod \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\" (UID: \"79315b1a-e9e2-422c-8be3-97ebdb2038c0\") "
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.631909 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.645264 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79315b1a-e9e2-422c-8be3-97ebdb2038c0-kube-api-access-96jx7" (OuterVolumeSpecName: "kube-api-access-96jx7") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "kube-api-access-96jx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.657896 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.657938 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-inventory" (OuterVolumeSpecName: "inventory") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.660677 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.671818 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.676711 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.684495 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.685770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "79315b1a-e9e2-422c-8be3-97ebdb2038c0" (UID: "79315b1a-e9e2-422c-8be3-97ebdb2038c0"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728253 4688 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728297 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96jx7\" (UniqueName: \"kubernetes.io/projected/79315b1a-e9e2-422c-8be3-97ebdb2038c0-kube-api-access-96jx7\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728310 4688 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728323 4688 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728336 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728348 4688 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728360 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728372 4688 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:32 crc kubenswrapper[4688]: I1125 12:55:32.728385 4688 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/79315b1a-e9e2-422c-8be3-97ebdb2038c0-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.015735 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx" event={"ID":"79315b1a-e9e2-422c-8be3-97ebdb2038c0","Type":"ContainerDied","Data":"71b198de417b568b6ddb70825367090f6db310e36cc80582684f01058dfd286c"}
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.016081 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b198de417b568b6ddb70825367090f6db310e36cc80582684f01058dfd286c"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.015906 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-z88rx"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.104501 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"]
Nov 25 12:55:33 crc kubenswrapper[4688]: E1125 12:55:33.104925 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79315b1a-e9e2-422c-8be3-97ebdb2038c0" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.104944 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="79315b1a-e9e2-422c-8be3-97ebdb2038c0" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.105139 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="79315b1a-e9e2-422c-8be3-97ebdb2038c0" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.105828 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.109189 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.109475 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.109611 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6vgvx"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.109707 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.110184 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.117078 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"]
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.236939 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.237009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.237159 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.237199 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.237235 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.237356 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.237445 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9hp\" (UniqueName: \"kubernetes.io/projected/34495de9-ab63-49d9-b01f-a07ec58b7a3f-kube-api-access-rc9hp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.339445 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.339587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc9hp\" (UniqueName: \"kubernetes.io/projected/34495de9-ab63-49d9-b01f-a07ec58b7a3f-kube-api-access-rc9hp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.339733 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.339762 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.339826 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.339850 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.339870 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.343259 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.343418 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.343945 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.344382 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.345587 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.345842 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.358088 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc9hp\" (UniqueName: \"kubernetes.io/projected/34495de9-ab63-49d9-b01f-a07ec58b7a3f-kube-api-access-rc9hp\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-65sz7\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.428867 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.937975 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"]
Nov 25 12:55:33 crc kubenswrapper[4688]: I1125 12:55:33.940758 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 12:55:34 crc kubenswrapper[4688]: I1125 12:55:34.025122 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7" event={"ID":"34495de9-ab63-49d9-b01f-a07ec58b7a3f","Type":"ContainerStarted","Data":"e379f73990e06e811f594da3857f5e7ce75acbaff4b5bf3d9fc37295c212abd8"}
Nov 25 12:55:36 crc kubenswrapper[4688]: I1125 12:55:36.046615 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7" event={"ID":"34495de9-ab63-49d9-b01f-a07ec58b7a3f","Type":"ContainerStarted","Data":"01c32b68c7a0e3d1456b4b116826d42027bf12d635b31f427614272cf122a617"}
Nov 25 12:55:36 crc kubenswrapper[4688]: I1125 12:55:36.078012 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7" podStartSLOduration=1.911765675 podStartE2EDuration="3.077985738s" podCreationTimestamp="2025-11-25 12:55:33 +0000 UTC" firstStartedPulling="2025-11-25 12:55:33.940440684 +0000 UTC m=+2484.050069552" lastFinishedPulling="2025-11-25 12:55:35.106660747 +0000 UTC m=+2485.216289615" observedRunningTime="2025-11-25 12:55:36.064699201 +0000 UTC m=+2486.174328089" watchObservedRunningTime="2025-11-25 12:55:36.077985738 +0000 UTC m=+2486.187614606"
Nov 25 12:55:41 crc kubenswrapper[4688]: I1125 12:55:41.739435 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:55:41 crc kubenswrapper[4688]: E1125 12:55:41.740399 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:55:52 crc kubenswrapper[4688]: I1125 12:55:52.739987 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:55:52 crc kubenswrapper[4688]: E1125 12:55:52.741786 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:56:07 crc kubenswrapper[4688]: I1125 12:56:07.739603 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:56:07 crc kubenswrapper[4688]: E1125 12:56:07.740289 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:56:21 crc kubenswrapper[4688]: I1125 12:56:21.740676 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:56:21 crc kubenswrapper[4688]: E1125 12:56:21.742193 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:56:35 crc kubenswrapper[4688]: I1125 12:56:35.740624 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:56:35 crc kubenswrapper[4688]: E1125 12:56:35.741269 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:56:47 crc kubenswrapper[4688]: I1125 12:56:47.742722 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:56:47 crc kubenswrapper[4688]: E1125 12:56:47.743955 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:56:59 crc kubenswrapper[4688]: I1125 12:56:59.740510 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:56:59 crc kubenswrapper[4688]: E1125 12:56:59.741282 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:57:12 crc kubenswrapper[4688]: I1125 12:57:12.739811 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:57:12 crc kubenswrapper[4688]: E1125 12:57:12.740600 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:57:23 crc kubenswrapper[4688]: I1125 12:57:23.741399 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:57:23 crc kubenswrapper[4688]: E1125 12:57:23.743103 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:57:36 crc kubenswrapper[4688]: I1125 12:57:36.741378 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:57:36 crc kubenswrapper[4688]: E1125 12:57:36.742429 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:57:51 crc kubenswrapper[4688]: I1125 12:57:51.739839 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:57:51 crc kubenswrapper[4688]: E1125 12:57:51.740590 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:58:01 crc kubenswrapper[4688]: I1125 12:58:01.472382 4688 generic.go:334] "Generic (PLEG): container finished" podID="34495de9-ab63-49d9-b01f-a07ec58b7a3f" containerID="01c32b68c7a0e3d1456b4b116826d42027bf12d635b31f427614272cf122a617" exitCode=0
Nov 25 12:58:01 crc kubenswrapper[4688]: I1125 12:58:01.472489 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7" event={"ID":"34495de9-ab63-49d9-b01f-a07ec58b7a3f","Type":"ContainerDied","Data":"01c32b68c7a0e3d1456b4b116826d42027bf12d635b31f427614272cf122a617"}
Nov 25 12:58:02 crc kubenswrapper[4688]: I1125 12:58:02.740624 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6"
Nov 25 12:58:02 crc kubenswrapper[4688]: E1125 12:58:02.741337 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.002770 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7"
Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.127889 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-telemetry-combined-ca-bundle\") pod \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") "
Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.128000 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-inventory\") pod \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") "
Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.128106 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ssh-key\") pod \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") "
Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.129107 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-1\") pod \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") "
Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.129244 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-2\") pod \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") "
Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.129577 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName:
\"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-0\") pod \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.129625 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc9hp\" (UniqueName: \"kubernetes.io/projected/34495de9-ab63-49d9-b01f-a07ec58b7a3f-kube-api-access-rc9hp\") pod \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\" (UID: \"34495de9-ab63-49d9-b01f-a07ec58b7a3f\") " Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.133992 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "34495de9-ab63-49d9-b01f-a07ec58b7a3f" (UID: "34495de9-ab63-49d9-b01f-a07ec58b7a3f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.136117 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34495de9-ab63-49d9-b01f-a07ec58b7a3f-kube-api-access-rc9hp" (OuterVolumeSpecName: "kube-api-access-rc9hp") pod "34495de9-ab63-49d9-b01f-a07ec58b7a3f" (UID: "34495de9-ab63-49d9-b01f-a07ec58b7a3f"). InnerVolumeSpecName "kube-api-access-rc9hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.167837 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-inventory" (OuterVolumeSpecName: "inventory") pod "34495de9-ab63-49d9-b01f-a07ec58b7a3f" (UID: "34495de9-ab63-49d9-b01f-a07ec58b7a3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.186347 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "34495de9-ab63-49d9-b01f-a07ec58b7a3f" (UID: "34495de9-ab63-49d9-b01f-a07ec58b7a3f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.188939 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "34495de9-ab63-49d9-b01f-a07ec58b7a3f" (UID: "34495de9-ab63-49d9-b01f-a07ec58b7a3f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.191347 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "34495de9-ab63-49d9-b01f-a07ec58b7a3f" (UID: "34495de9-ab63-49d9-b01f-a07ec58b7a3f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.194325 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "34495de9-ab63-49d9-b01f-a07ec58b7a3f" (UID: "34495de9-ab63-49d9-b01f-a07ec58b7a3f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.231567 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.231611 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.231634 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.231652 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc9hp\" (UniqueName: \"kubernetes.io/projected/34495de9-ab63-49d9-b01f-a07ec58b7a3f-kube-api-access-rc9hp\") on node \"crc\" DevicePath \"\"" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.231671 4688 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.231690 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.231705 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34495de9-ab63-49d9-b01f-a07ec58b7a3f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.503342 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7" event={"ID":"34495de9-ab63-49d9-b01f-a07ec58b7a3f","Type":"ContainerDied","Data":"e379f73990e06e811f594da3857f5e7ce75acbaff4b5bf3d9fc37295c212abd8"} Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.503413 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e379f73990e06e811f594da3857f5e7ce75acbaff4b5bf3d9fc37295c212abd8" Nov 25 12:58:03 crc kubenswrapper[4688]: I1125 12:58:03.503716 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-65sz7" Nov 25 12:58:16 crc kubenswrapper[4688]: I1125 12:58:16.740325 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6" Nov 25 12:58:16 crc kubenswrapper[4688]: E1125 12:58:16.741592 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:58:29 crc kubenswrapper[4688]: I1125 12:58:29.740110 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6" Nov 25 12:58:29 crc kubenswrapper[4688]: E1125 12:58:29.741009 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:58:41 crc kubenswrapper[4688]: I1125 12:58:41.740509 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6" Nov 25 12:58:41 crc kubenswrapper[4688]: E1125 12:58:41.741585 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 12:58:52 crc kubenswrapper[4688]: I1125 12:58:52.744058 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6" Nov 25 12:58:53 crc kubenswrapper[4688]: I1125 12:58:53.064807 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"d7f6010fb0df9c2e667539baa0196010cf78e5e21c863cb43d0f9c102bd523c5"} Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.176634 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86"] Nov 25 13:00:00 crc kubenswrapper[4688]: E1125 13:00:00.178016 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34495de9-ab63-49d9-b01f-a07ec58b7a3f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.178037 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="34495de9-ab63-49d9-b01f-a07ec58b7a3f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.178570 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="34495de9-ab63-49d9-b01f-a07ec58b7a3f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 
13:00:00.179681 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.183637 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.184590 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.214070 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86"] Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.298905 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75dcd43f-bc0b-40da-9208-515bc53f7935-config-volume\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.299054 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87pnt\" (UniqueName: \"kubernetes.io/projected/75dcd43f-bc0b-40da-9208-515bc53f7935-kube-api-access-87pnt\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.299094 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75dcd43f-bc0b-40da-9208-515bc53f7935-secret-volume\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.400960 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87pnt\" (UniqueName: \"kubernetes.io/projected/75dcd43f-bc0b-40da-9208-515bc53f7935-kube-api-access-87pnt\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.401043 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75dcd43f-bc0b-40da-9208-515bc53f7935-secret-volume\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.401116 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75dcd43f-bc0b-40da-9208-515bc53f7935-config-volume\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.402248 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/75dcd43f-bc0b-40da-9208-515bc53f7935-config-volume\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.412402 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75dcd43f-bc0b-40da-9208-515bc53f7935-secret-volume\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.418353 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87pnt\" (UniqueName: \"kubernetes.io/projected/75dcd43f-bc0b-40da-9208-515bc53f7935-kube-api-access-87pnt\") pod \"collect-profiles-29401260-8fk86\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.516058 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:00 crc kubenswrapper[4688]: I1125 13:00:00.950756 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86"] Nov 25 13:00:00 crc kubenswrapper[4688]: W1125 13:00:00.954924 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75dcd43f_bc0b_40da_9208_515bc53f7935.slice/crio-c58b7d9a29540364c471ffc768a0dd8bcd2bf3f5f345b9cfd06a78f16cfe29f9 WatchSource:0}: Error finding container c58b7d9a29540364c471ffc768a0dd8bcd2bf3f5f345b9cfd06a78f16cfe29f9: Status 404 returned error can't find the container with id c58b7d9a29540364c471ffc768a0dd8bcd2bf3f5f345b9cfd06a78f16cfe29f9 Nov 25 13:00:01 crc kubenswrapper[4688]: I1125 13:00:01.724052 4688 generic.go:334] "Generic (PLEG): container finished" podID="75dcd43f-bc0b-40da-9208-515bc53f7935" containerID="fe7095ccca3c9922a23519b6363b083316b79cc3988dff47a8f98e265f0172ae" exitCode=0 Nov 25 13:00:01 crc kubenswrapper[4688]: I1125 13:00:01.724094 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" event={"ID":"75dcd43f-bc0b-40da-9208-515bc53f7935","Type":"ContainerDied","Data":"fe7095ccca3c9922a23519b6363b083316b79cc3988dff47a8f98e265f0172ae"} Nov 25 13:00:01 crc kubenswrapper[4688]: I1125 13:00:01.724470 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" event={"ID":"75dcd43f-bc0b-40da-9208-515bc53f7935","Type":"ContainerStarted","Data":"c58b7d9a29540364c471ffc768a0dd8bcd2bf3f5f345b9cfd06a78f16cfe29f9"} Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.105372 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.248255 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75dcd43f-bc0b-40da-9208-515bc53f7935-config-volume\") pod \"75dcd43f-bc0b-40da-9208-515bc53f7935\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.248337 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75dcd43f-bc0b-40da-9208-515bc53f7935-secret-volume\") pod \"75dcd43f-bc0b-40da-9208-515bc53f7935\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.248441 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87pnt\" (UniqueName: \"kubernetes.io/projected/75dcd43f-bc0b-40da-9208-515bc53f7935-kube-api-access-87pnt\") pod \"75dcd43f-bc0b-40da-9208-515bc53f7935\" (UID: \"75dcd43f-bc0b-40da-9208-515bc53f7935\") " Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.249197 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75dcd43f-bc0b-40da-9208-515bc53f7935-config-volume" (OuterVolumeSpecName: "config-volume") pod "75dcd43f-bc0b-40da-9208-515bc53f7935" (UID: "75dcd43f-bc0b-40da-9208-515bc53f7935"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.254372 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75dcd43f-bc0b-40da-9208-515bc53f7935-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75dcd43f-bc0b-40da-9208-515bc53f7935" (UID: "75dcd43f-bc0b-40da-9208-515bc53f7935"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.254856 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dcd43f-bc0b-40da-9208-515bc53f7935-kube-api-access-87pnt" (OuterVolumeSpecName: "kube-api-access-87pnt") pod "75dcd43f-bc0b-40da-9208-515bc53f7935" (UID: "75dcd43f-bc0b-40da-9208-515bc53f7935"). InnerVolumeSpecName "kube-api-access-87pnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.350476 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75dcd43f-bc0b-40da-9208-515bc53f7935-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.350538 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75dcd43f-bc0b-40da-9208-515bc53f7935-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.350559 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87pnt\" (UniqueName: \"kubernetes.io/projected/75dcd43f-bc0b-40da-9208-515bc53f7935-kube-api-access-87pnt\") on node \"crc\" DevicePath \"\"" Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.741566 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" event={"ID":"75dcd43f-bc0b-40da-9208-515bc53f7935","Type":"ContainerDied","Data":"c58b7d9a29540364c471ffc768a0dd8bcd2bf3f5f345b9cfd06a78f16cfe29f9"} Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.741934 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c58b7d9a29540364c471ffc768a0dd8bcd2bf3f5f345b9cfd06a78f16cfe29f9" Nov 25 13:00:03 crc kubenswrapper[4688]: I1125 13:00:03.741716 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401260-8fk86" Nov 25 13:00:04 crc kubenswrapper[4688]: I1125 13:00:04.216863 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6"] Nov 25 13:00:04 crc kubenswrapper[4688]: I1125 13:00:04.233564 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401215-sc2w6"] Nov 25 13:00:04 crc kubenswrapper[4688]: I1125 13:00:04.752513 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694823ec-105f-4183-9dfb-8fa7f414c8ac" path="/var/lib/kubelet/pods/694823ec-105f-4183-9dfb-8fa7f414c8ac/volumes" Nov 25 13:00:31 crc kubenswrapper[4688]: I1125 13:00:31.871501 4688 scope.go:117] "RemoveContainer" containerID="a1057226e36ba81ad338e64d5cb85c59996230ee07efc7d6ac6df1f36c24fc46" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.005129 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nk6jc"] Nov 25 13:00:52 crc kubenswrapper[4688]: E1125 13:00:52.006209 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75dcd43f-bc0b-40da-9208-515bc53f7935" containerName="collect-profiles" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.006225 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="75dcd43f-bc0b-40da-9208-515bc53f7935" containerName="collect-profiles" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.006460 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="75dcd43f-bc0b-40da-9208-515bc53f7935" containerName="collect-profiles" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.008187 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.016136 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nk6jc"] Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.034638 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-catalog-content\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.034822 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhs2v\" (UniqueName: \"kubernetes.io/projected/dd818eac-cbc6-445b-8c7a-5f9c39c65057-kube-api-access-rhs2v\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.034934 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-utilities\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.136369 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-catalog-content\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.136482 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhs2v\" (UniqueName: \"kubernetes.io/projected/dd818eac-cbc6-445b-8c7a-5f9c39c65057-kube-api-access-rhs2v\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.136561 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-utilities\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.137125 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-utilities\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.138547 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-catalog-content\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.167702 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rhs2v\" (UniqueName: \"kubernetes.io/projected/dd818eac-cbc6-445b-8c7a-5f9c39c65057-kube-api-access-rhs2v\") pod \"community-operators-nk6jc\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.347419 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:00:52 crc kubenswrapper[4688]: I1125 13:00:52.871034 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nk6jc"] Nov 25 13:00:53 crc kubenswrapper[4688]: I1125 13:00:53.253922 4688 generic.go:334] "Generic (PLEG): container finished" podID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerID="177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1" exitCode=0 Nov 25 13:00:53 crc kubenswrapper[4688]: I1125 13:00:53.254166 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk6jc" event={"ID":"dd818eac-cbc6-445b-8c7a-5f9c39c65057","Type":"ContainerDied","Data":"177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1"} Nov 25 13:00:53 crc kubenswrapper[4688]: I1125 13:00:53.254263 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk6jc" event={"ID":"dd818eac-cbc6-445b-8c7a-5f9c39c65057","Type":"ContainerStarted","Data":"f44e0fed34a2d2a79ae599c41ef95db5e7b48195e0110b2c4c9835a6e91b8cc7"} Nov 25 13:00:53 crc kubenswrapper[4688]: I1125 13:00:53.255586 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 13:00:54 crc kubenswrapper[4688]: I1125 13:00:54.265124 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk6jc" event={"ID":"dd818eac-cbc6-445b-8c7a-5f9c39c65057","Type":"ContainerStarted","Data":"d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b"} Nov 25 13:00:55 crc kubenswrapper[4688]: I1125 13:00:55.279824 4688 generic.go:334] "Generic (PLEG): container finished" podID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerID="d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b" exitCode=0 Nov 25 13:00:55 crc kubenswrapper[4688]: I1125 13:00:55.279922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk6jc" event={"ID":"dd818eac-cbc6-445b-8c7a-5f9c39c65057","Type":"ContainerDied","Data":"d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b"} Nov 25 13:00:56 crc kubenswrapper[4688]: I1125 13:00:56.293246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk6jc" event={"ID":"dd818eac-cbc6-445b-8c7a-5f9c39c65057","Type":"ContainerStarted","Data":"55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d"} Nov 25 13:00:56 crc kubenswrapper[4688]: I1125 13:00:56.312067 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nk6jc" podStartSLOduration=2.877682944 podStartE2EDuration="5.312047018s" podCreationTimestamp="2025-11-25 13:00:51 +0000 UTC" firstStartedPulling="2025-11-25 13:00:53.255308366 +0000 UTC m=+2803.364937234" lastFinishedPulling="2025-11-25 13:00:55.68967244 +0000 UTC m=+2805.799301308" observedRunningTime="2025-11-25 13:00:56.310461585 +0000 UTC m=+2806.420090473" watchObservedRunningTime="2025-11-25 
13:00:56.312047018 +0000 UTC m=+2806.421675896" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.158582 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401261-jmm7d"] Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.160221 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.179279 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401261-jmm7d"] Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.296818 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-combined-ca-bundle\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.297203 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-fernet-keys\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.297240 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrcnj\" (UniqueName: \"kubernetes.io/projected/a3a1b93f-b907-499e-b150-f2627f93b4b2-kube-api-access-rrcnj\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.297322 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-config-data\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.318677 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/2.log" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.398622 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-combined-ca-bundle\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.398737 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-fernet-keys\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.398780 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrcnj\" (UniqueName: \"kubernetes.io/projected/a3a1b93f-b907-499e-b150-f2627f93b4b2-kube-api-access-rrcnj\") pod \"keystone-cron-29401261-jmm7d\" (UID: 
\"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.398889 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-config-data\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.406894 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-fernet-keys\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.408107 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-config-data\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.412342 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-combined-ca-bundle\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.418575 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrcnj\" (UniqueName: \"kubernetes.io/projected/a3a1b93f-b907-499e-b150-f2627f93b4b2-kube-api-access-rrcnj\") pod \"keystone-cron-29401261-jmm7d\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.482179 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:00 crc kubenswrapper[4688]: I1125 13:01:00.953819 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401261-jmm7d"] Nov 25 13:01:01 crc kubenswrapper[4688]: I1125 13:01:01.342154 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401261-jmm7d" event={"ID":"a3a1b93f-b907-499e-b150-f2627f93b4b2","Type":"ContainerStarted","Data":"1e75b612202036494d23d4be4d60ab4f0a0e2a5afab59b4c613af1b5a47371ad"} Nov 25 13:01:01 crc kubenswrapper[4688]: I1125 13:01:01.344470 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401261-jmm7d" event={"ID":"a3a1b93f-b907-499e-b150-f2627f93b4b2","Type":"ContainerStarted","Data":"ff1078578074a39ad67985f4fd461538879bade5b7944325258ce4d264abfd15"} Nov 25 13:01:01 crc kubenswrapper[4688]: I1125 13:01:01.375923 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401261-jmm7d" podStartSLOduration=1.375901343 podStartE2EDuration="1.375901343s" podCreationTimestamp="2025-11-25 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:01:01.363710626 +0000 UTC m=+2811.473339494" watchObservedRunningTime="2025-11-25 13:01:01.375901343 +0000 UTC m=+2811.485530221" Nov 25 13:01:01 crc kubenswrapper[4688]: I1125 13:01:01.404060 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/1.log" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.348013 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.348312 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.424962 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.804165 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.804380 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" containerName="openstackclient" containerID="cri-o://67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c" gracePeriod=2 Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.820461 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.846886 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 13:01:02 crc kubenswrapper[4688]: E1125 13:01:02.847493 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" containerName="openstackclient" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.847550 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" containerName="openstackclient" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.847872 4688 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" containerName="openstackclient" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.848836 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.852131 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" podUID="c22633a0-aeed-4e1d-a178-05b245f91b77" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.859539 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.959657 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.959891 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjwx\" (UniqueName: \"kubernetes.io/projected/c22633a0-aeed-4e1d-a178-05b245f91b77-kube-api-access-5zjwx\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.959990 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config-secret\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:02 crc kubenswrapper[4688]: I1125 13:01:02.960034 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.061906 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zjwx\" (UniqueName: \"kubernetes.io/projected/c22633a0-aeed-4e1d-a178-05b245f91b77-kube-api-access-5zjwx\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.061978 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config-secret\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.062003 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.062080 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.062985 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.067853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config-secret\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.067868 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.076584 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zjwx\" (UniqueName: \"kubernetes.io/projected/c22633a0-aeed-4e1d-a178-05b245f91b77-kube-api-access-5zjwx\") pod \"openstackclient\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.168663 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.360395 4688 generic.go:334] "Generic (PLEG): container finished" podID="a3a1b93f-b907-499e-b150-f2627f93b4b2" containerID="1e75b612202036494d23d4be4d60ab4f0a0e2a5afab59b4c613af1b5a47371ad" exitCode=0 Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.360577 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401261-jmm7d" event={"ID":"a3a1b93f-b907-499e-b150-f2627f93b4b2","Type":"ContainerDied","Data":"1e75b612202036494d23d4be4d60ab4f0a0e2a5afab59b4c613af1b5a47371ad"} Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.427987 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.489428 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nk6jc"] Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.661989 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 13:01:03 crc kubenswrapper[4688]: W1125 13:01:03.663770 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc22633a0_aeed_4e1d_a178_05b245f91b77.slice/crio-03346e2772ae5c3c953d52bc08b036497d555bd96ec45b1cb0d2f565f4d08d28 WatchSource:0}: Error finding container 03346e2772ae5c3c953d52bc08b036497d555bd96ec45b1cb0d2f565f4d08d28: Status 404 returned error can't find the container with id 03346e2772ae5c3c953d52bc08b036497d555bd96ec45b1cb0d2f565f4d08d28 Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.897895 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-gkjms"] Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.899442 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.913737 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gkjms"] Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.926072 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqp5p\" (UniqueName: \"kubernetes.io/projected/d61628bc-7d4e-4548-9770-6f764e6b89d7-kube-api-access-cqp5p\") pod \"aodh-db-create-gkjms\" (UID: \"d61628bc-7d4e-4548-9770-6f764e6b89d7\") " pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:03 crc kubenswrapper[4688]: I1125 13:01:03.926308 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d61628bc-7d4e-4548-9770-6f764e6b89d7-operator-scripts\") pod \"aodh-db-create-gkjms\" (UID: \"d61628bc-7d4e-4548-9770-6f764e6b89d7\") " pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.006605 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-1f34-account-create-zgm8c"] Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.008446 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.010363 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.027503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqp5p\" (UniqueName: \"kubernetes.io/projected/d61628bc-7d4e-4548-9770-6f764e6b89d7-kube-api-access-cqp5p\") pod \"aodh-db-create-gkjms\" (UID: \"d61628bc-7d4e-4548-9770-6f764e6b89d7\") " pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.027621 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bb311ca-6585-4cc6-8b68-23755a207433-operator-scripts\") pod \"aodh-1f34-account-create-zgm8c\" (UID: \"9bb311ca-6585-4cc6-8b68-23755a207433\") " pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.027675 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnf8l\" (UniqueName: \"kubernetes.io/projected/9bb311ca-6585-4cc6-8b68-23755a207433-kube-api-access-qnf8l\") pod \"aodh-1f34-account-create-zgm8c\" (UID: \"9bb311ca-6585-4cc6-8b68-23755a207433\") " pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.027864 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d61628bc-7d4e-4548-9770-6f764e6b89d7-operator-scripts\") pod \"aodh-db-create-gkjms\" (UID: \"d61628bc-7d4e-4548-9770-6f764e6b89d7\") " pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.029008 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d61628bc-7d4e-4548-9770-6f764e6b89d7-operator-scripts\") pod \"aodh-db-create-gkjms\" (UID: \"d61628bc-7d4e-4548-9770-6f764e6b89d7\") " pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.029062 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1f34-account-create-zgm8c"] Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.049806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqp5p\" (UniqueName: \"kubernetes.io/projected/d61628bc-7d4e-4548-9770-6f764e6b89d7-kube-api-access-cqp5p\") pod \"aodh-db-create-gkjms\" (UID: \"d61628bc-7d4e-4548-9770-6f764e6b89d7\") " pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.129758 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bb311ca-6585-4cc6-8b68-23755a207433-operator-scripts\") pod \"aodh-1f34-account-create-zgm8c\" (UID: \"9bb311ca-6585-4cc6-8b68-23755a207433\") " pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.129852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnf8l\" (UniqueName: \"kubernetes.io/projected/9bb311ca-6585-4cc6-8b68-23755a207433-kube-api-access-qnf8l\") pod \"aodh-1f34-account-create-zgm8c\" (UID: \"9bb311ca-6585-4cc6-8b68-23755a207433\") " pod="openstack/aodh-1f34-account-create-zgm8c" 
Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.130875 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bb311ca-6585-4cc6-8b68-23755a207433-operator-scripts\") pod \"aodh-1f34-account-create-zgm8c\" (UID: \"9bb311ca-6585-4cc6-8b68-23755a207433\") " pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.150571 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnf8l\" (UniqueName: \"kubernetes.io/projected/9bb311ca-6585-4cc6-8b68-23755a207433-kube-api-access-qnf8l\") pod \"aodh-1f34-account-create-zgm8c\" (UID: \"9bb311ca-6585-4cc6-8b68-23755a207433\") " pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.217416 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.326889 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.387126 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c22633a0-aeed-4e1d-a178-05b245f91b77","Type":"ContainerStarted","Data":"c1efd3478df99f65c6a797aea85e0e1dd8346805300a3e9a19c89c410fa6a14f"} Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.387481 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c22633a0-aeed-4e1d-a178-05b245f91b77","Type":"ContainerStarted","Data":"03346e2772ae5c3c953d52bc08b036497d555bd96ec45b1cb0d2f565f4d08d28"} Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.418365 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.418247169 podStartE2EDuration="2.418247169s" podCreationTimestamp="2025-11-25 13:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:01:04.402646542 +0000 UTC m=+2814.512275410" watchObservedRunningTime="2025-11-25 13:01:04.418247169 +0000 UTC m=+2814.527876037" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.727647 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gkjms"] Nov 25 13:01:04 crc kubenswrapper[4688]: W1125 13:01:04.732213 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61628bc_7d4e_4548_9770_6f764e6b89d7.slice/crio-6181046482d24f0f05c6b057eb32c80ee20dd1e735a33f90910e002ea0c4fc1d WatchSource:0}: Error finding container 6181046482d24f0f05c6b057eb32c80ee20dd1e735a33f90910e002ea0c4fc1d: Status 404 returned error can't find the container with id 6181046482d24f0f05c6b057eb32c80ee20dd1e735a33f90910e002ea0c4fc1d Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.820821 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.884743 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1f34-account-create-zgm8c"] Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.946981 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-fernet-keys\") pod \"a3a1b93f-b907-499e-b150-f2627f93b4b2\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.947102 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-config-data\") pod \"a3a1b93f-b907-499e-b150-f2627f93b4b2\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.947273 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrcnj\" (UniqueName: \"kubernetes.io/projected/a3a1b93f-b907-499e-b150-f2627f93b4b2-kube-api-access-rrcnj\") pod \"a3a1b93f-b907-499e-b150-f2627f93b4b2\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.947361 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-combined-ca-bundle\") pod \"a3a1b93f-b907-499e-b150-f2627f93b4b2\" (UID: \"a3a1b93f-b907-499e-b150-f2627f93b4b2\") " Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.952397 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3a1b93f-b907-499e-b150-f2627f93b4b2-kube-api-access-rrcnj" (OuterVolumeSpecName: "kube-api-access-rrcnj") pod "a3a1b93f-b907-499e-b150-f2627f93b4b2" (UID: "a3a1b93f-b907-499e-b150-f2627f93b4b2"). InnerVolumeSpecName "kube-api-access-rrcnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.954954 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a3a1b93f-b907-499e-b150-f2627f93b4b2" (UID: "a3a1b93f-b907-499e-b150-f2627f93b4b2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:04 crc kubenswrapper[4688]: I1125 13:01:04.981450 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3a1b93f-b907-499e-b150-f2627f93b4b2" (UID: "a3a1b93f-b907-499e-b150-f2627f93b4b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.011134 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-config-data" (OuterVolumeSpecName: "config-data") pod "a3a1b93f-b907-499e-b150-f2627f93b4b2" (UID: "a3a1b93f-b907-499e-b150-f2627f93b4b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.049835 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.049862 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.049872 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrcnj\" (UniqueName: \"kubernetes.io/projected/a3a1b93f-b907-499e-b150-f2627f93b4b2-kube-api-access-rrcnj\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.049883 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a1b93f-b907-499e-b150-f2627f93b4b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.062516 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.252454 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config-secret\") pod \"fe63e1cf-543e-46d0-a4f8-0144f2201219\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.252960 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config\") pod \"fe63e1cf-543e-46d0-a4f8-0144f2201219\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.253001 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-combined-ca-bundle\") pod \"fe63e1cf-543e-46d0-a4f8-0144f2201219\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.253035 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52vnj\" (UniqueName: \"kubernetes.io/projected/fe63e1cf-543e-46d0-a4f8-0144f2201219-kube-api-access-52vnj\") pod \"fe63e1cf-543e-46d0-a4f8-0144f2201219\" (UID: \"fe63e1cf-543e-46d0-a4f8-0144f2201219\") " Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.266550 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe63e1cf-543e-46d0-a4f8-0144f2201219-kube-api-access-52vnj" (OuterVolumeSpecName: "kube-api-access-52vnj") pod "fe63e1cf-543e-46d0-a4f8-0144f2201219" (UID: "fe63e1cf-543e-46d0-a4f8-0144f2201219"). InnerVolumeSpecName "kube-api-access-52vnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.288663 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe63e1cf-543e-46d0-a4f8-0144f2201219" (UID: "fe63e1cf-543e-46d0-a4f8-0144f2201219"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.300747 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fe63e1cf-543e-46d0-a4f8-0144f2201219" (UID: "fe63e1cf-543e-46d0-a4f8-0144f2201219"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.313700 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fe63e1cf-543e-46d0-a4f8-0144f2201219" (UID: "fe63e1cf-543e-46d0-a4f8-0144f2201219"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.355123 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.355160 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fe63e1cf-543e-46d0-a4f8-0144f2201219-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.355169 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe63e1cf-543e-46d0-a4f8-0144f2201219-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.355177 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52vnj\" (UniqueName: \"kubernetes.io/projected/fe63e1cf-543e-46d0-a4f8-0144f2201219-kube-api-access-52vnj\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.403685 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gkjms" event={"ID":"d61628bc-7d4e-4548-9770-6f764e6b89d7","Type":"ContainerStarted","Data":"4d88f20743144536d0d8acb93a7fa9f20ce40b802bc34157f29b28071f6b4f6a"} Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.403763 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gkjms" event={"ID":"d61628bc-7d4e-4548-9770-6f764e6b89d7","Type":"ContainerStarted","Data":"6181046482d24f0f05c6b057eb32c80ee20dd1e735a33f90910e002ea0c4fc1d"} Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.406179 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1f34-account-create-zgm8c" event={"ID":"9bb311ca-6585-4cc6-8b68-23755a207433","Type":"ContainerStarted","Data":"3558d08d969ac34da660840cce76beb6985f13c32155a0dfe79f485947ec9ad4"} Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.406214 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/aodh-1f34-account-create-zgm8c" event={"ID":"9bb311ca-6585-4cc6-8b68-23755a207433","Type":"ContainerStarted","Data":"76c44a9cd16b2dced1c1b52ca056604a3d01d8ef6125fd8c430e831b91400e2c"} Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.409274 4688 generic.go:334] "Generic (PLEG): container finished" podID="fe63e1cf-543e-46d0-a4f8-0144f2201219" containerID="67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c" exitCode=137 Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.409337 4688 scope.go:117] "RemoveContainer" containerID="67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.409423 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.413767 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401261-jmm7d" event={"ID":"a3a1b93f-b907-499e-b150-f2627f93b4b2","Type":"ContainerDied","Data":"ff1078578074a39ad67985f4fd461538879bade5b7944325258ce4d264abfd15"} Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.413815 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff1078578074a39ad67985f4fd461538879bade5b7944325258ce4d264abfd15" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.413979 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nk6jc" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerName="registry-server" containerID="cri-o://55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d" gracePeriod=2 Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.414122 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401261-jmm7d" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.453872 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" podUID="c22633a0-aeed-4e1d-a178-05b245f91b77" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.455012 4688 scope.go:117] "RemoveContainer" containerID="67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.455334 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-gkjms" podStartSLOduration=2.45532451 podStartE2EDuration="2.45532451s" podCreationTimestamp="2025-11-25 13:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:01:05.424794202 +0000 UTC m=+2815.534423090" watchObservedRunningTime="2025-11-25 13:01:05.45532451 +0000 UTC m=+2815.564953368" Nov 25 13:01:05 crc kubenswrapper[4688]: E1125 13:01:05.456419 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c\": container with ID starting with 67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c not found: ID does not exist" containerID="67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.456467 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c"} err="failed to get container status \"67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c\": rpc error: code = NotFound desc = could not find container \"67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c\": container with ID starting with 67a29641f6848118871f4c00ef0a31738da82b9ca8e437641b92f67902ed745c not found: ID does not exist" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.462117 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-1f34-account-create-zgm8c" podStartSLOduration=2.462095982 podStartE2EDuration="2.462095982s" podCreationTimestamp="2025-11-25 13:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:01:05.449692779 +0000 UTC m=+2815.559321647" watchObservedRunningTime="2025-11-25 13:01:05.462095982 +0000 UTC m=+2815.571724850" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.820591 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.969722 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhs2v\" (UniqueName: \"kubernetes.io/projected/dd818eac-cbc6-445b-8c7a-5f9c39c65057-kube-api-access-rhs2v\") pod \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.969786 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-utilities\") pod \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.969969 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-catalog-content\") pod \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\" (UID: \"dd818eac-cbc6-445b-8c7a-5f9c39c65057\") " Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.970876 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-utilities" (OuterVolumeSpecName: "utilities") pod "dd818eac-cbc6-445b-8c7a-5f9c39c65057" (UID: "dd818eac-cbc6-445b-8c7a-5f9c39c65057"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:01:05 crc kubenswrapper[4688]: I1125 13:01:05.973799 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd818eac-cbc6-445b-8c7a-5f9c39c65057-kube-api-access-rhs2v" (OuterVolumeSpecName: "kube-api-access-rhs2v") pod "dd818eac-cbc6-445b-8c7a-5f9c39c65057" (UID: "dd818eac-cbc6-445b-8c7a-5f9c39c65057"). InnerVolumeSpecName "kube-api-access-rhs2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.022009 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd818eac-cbc6-445b-8c7a-5f9c39c65057" (UID: "dd818eac-cbc6-445b-8c7a-5f9c39c65057"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.072956 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.073253 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhs2v\" (UniqueName: \"kubernetes.io/projected/dd818eac-cbc6-445b-8c7a-5f9c39c65057-kube-api-access-rhs2v\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.073345 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818eac-cbc6-445b-8c7a-5f9c39c65057-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.425418 4688 generic.go:334] "Generic (PLEG): container finished" podID="d61628bc-7d4e-4548-9770-6f764e6b89d7" containerID="4d88f20743144536d0d8acb93a7fa9f20ce40b802bc34157f29b28071f6b4f6a" exitCode=0 Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.425502 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gkjms" event={"ID":"d61628bc-7d4e-4548-9770-6f764e6b89d7","Type":"ContainerDied","Data":"4d88f20743144536d0d8acb93a7fa9f20ce40b802bc34157f29b28071f6b4f6a"} Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.427566 4688 generic.go:334] "Generic (PLEG): container finished" podID="9bb311ca-6585-4cc6-8b68-23755a207433" containerID="3558d08d969ac34da660840cce76beb6985f13c32155a0dfe79f485947ec9ad4" exitCode=0 Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.427635 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1f34-account-create-zgm8c" event={"ID":"9bb311ca-6585-4cc6-8b68-23755a207433","Type":"ContainerDied","Data":"3558d08d969ac34da660840cce76beb6985f13c32155a0dfe79f485947ec9ad4"} Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.431987 4688 generic.go:334] "Generic (PLEG): container finished" podID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerID="55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d" exitCode=0 Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.432031 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk6jc" event={"ID":"dd818eac-cbc6-445b-8c7a-5f9c39c65057","Type":"ContainerDied","Data":"55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d"} Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.432075 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nk6jc" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.432107 4688 scope.go:117] "RemoveContainer" containerID="55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.432092 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk6jc" event={"ID":"dd818eac-cbc6-445b-8c7a-5f9c39c65057","Type":"ContainerDied","Data":"f44e0fed34a2d2a79ae599c41ef95db5e7b48195e0110b2c4c9835a6e91b8cc7"} Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.467866 4688 scope.go:117] "RemoveContainer" containerID="d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.510946 4688 scope.go:117] "RemoveContainer" containerID="177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.519680 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nk6jc"] Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.531239 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nk6jc"] Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.566655 4688 scope.go:117] "RemoveContainer" containerID="55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d" Nov 25 13:01:06 crc kubenswrapper[4688]: E1125 13:01:06.567337 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d\": container with ID starting with 55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d not found: ID does not exist" containerID="55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.567392 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d"} err="failed to get container status \"55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d\": rpc error: code = NotFound desc = could not find container \"55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d\": container with ID starting with 55c6ec75f47cb2437a48b85f40fda73c214cc9464075841874e945f8451c9f5d not found: ID does not exist" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.567424 4688 scope.go:117] "RemoveContainer" containerID="d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b" Nov 25 13:01:06 crc kubenswrapper[4688]: E1125 13:01:06.567850 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b\": container with ID starting with d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b not found: ID does not exist" containerID="d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.567904 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b"} err="failed to get container status \"d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b\": rpc error: code = NotFound desc = could not find 
container \"d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b\": container with ID starting with d15fde167364f463101e37784a7568028c82c73e1bdba951dcff05a06498599b not found: ID does not exist" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.567941 4688 scope.go:117] "RemoveContainer" containerID="177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1" Nov 25 13:01:06 crc kubenswrapper[4688]: E1125 13:01:06.568263 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1\": container with ID starting with 177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1 not found: ID does not exist" containerID="177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.568373 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1"} err="failed to get container status \"177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1\": rpc error: code = NotFound desc = could not find container \"177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1\": container with ID starting with 177598b0a3107f44c1de1cb87516d2c31f76264007b418679709d0ed61bdb5b1 not found: ID does not exist" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.754783 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" path="/var/lib/kubelet/pods/dd818eac-cbc6-445b-8c7a-5f9c39c65057/volumes" Nov 25 13:01:06 crc kubenswrapper[4688]: I1125 13:01:06.756264 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe63e1cf-543e-46d0-a4f8-0144f2201219" path="/var/lib/kubelet/pods/fe63e1cf-543e-46d0-a4f8-0144f2201219/volumes" Nov 25 13:01:07 crc kubenswrapper[4688]: I1125 13:01:07.892996 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:07 crc kubenswrapper[4688]: I1125 13:01:07.900815 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.016684 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqp5p\" (UniqueName: \"kubernetes.io/projected/d61628bc-7d4e-4548-9770-6f764e6b89d7-kube-api-access-cqp5p\") pod \"d61628bc-7d4e-4548-9770-6f764e6b89d7\" (UID: \"d61628bc-7d4e-4548-9770-6f764e6b89d7\") " Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.017054 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bb311ca-6585-4cc6-8b68-23755a207433-operator-scripts\") pod \"9bb311ca-6585-4cc6-8b68-23755a207433\" (UID: \"9bb311ca-6585-4cc6-8b68-23755a207433\") " Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.017112 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnf8l\" (UniqueName: \"kubernetes.io/projected/9bb311ca-6585-4cc6-8b68-23755a207433-kube-api-access-qnf8l\") pod \"9bb311ca-6585-4cc6-8b68-23755a207433\" (UID: \"9bb311ca-6585-4cc6-8b68-23755a207433\") " Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.017158 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d61628bc-7d4e-4548-9770-6f764e6b89d7-operator-scripts\") pod \"d61628bc-7d4e-4548-9770-6f764e6b89d7\" (UID: \"d61628bc-7d4e-4548-9770-6f764e6b89d7\") " Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.017676 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb311ca-6585-4cc6-8b68-23755a207433-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bb311ca-6585-4cc6-8b68-23755a207433" (UID: "9bb311ca-6585-4cc6-8b68-23755a207433"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.017727 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61628bc-7d4e-4548-9770-6f764e6b89d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d61628bc-7d4e-4548-9770-6f764e6b89d7" (UID: "d61628bc-7d4e-4548-9770-6f764e6b89d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.023780 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb311ca-6585-4cc6-8b68-23755a207433-kube-api-access-qnf8l" (OuterVolumeSpecName: "kube-api-access-qnf8l") pod "9bb311ca-6585-4cc6-8b68-23755a207433" (UID: "9bb311ca-6585-4cc6-8b68-23755a207433"). InnerVolumeSpecName "kube-api-access-qnf8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.023846 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61628bc-7d4e-4548-9770-6f764e6b89d7-kube-api-access-cqp5p" (OuterVolumeSpecName: "kube-api-access-cqp5p") pod "d61628bc-7d4e-4548-9770-6f764e6b89d7" (UID: "d61628bc-7d4e-4548-9770-6f764e6b89d7"). InnerVolumeSpecName "kube-api-access-cqp5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.119816 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqp5p\" (UniqueName: \"kubernetes.io/projected/d61628bc-7d4e-4548-9770-6f764e6b89d7-kube-api-access-cqp5p\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.119845 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bb311ca-6585-4cc6-8b68-23755a207433-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.119854 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnf8l\" (UniqueName: \"kubernetes.io/projected/9bb311ca-6585-4cc6-8b68-23755a207433-kube-api-access-qnf8l\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.119865 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d61628bc-7d4e-4548-9770-6f764e6b89d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.462564 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gkjms" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.462615 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gkjms" event={"ID":"d61628bc-7d4e-4548-9770-6f764e6b89d7","Type":"ContainerDied","Data":"6181046482d24f0f05c6b057eb32c80ee20dd1e735a33f90910e002ea0c4fc1d"} Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.463135 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6181046482d24f0f05c6b057eb32c80ee20dd1e735a33f90910e002ea0c4fc1d" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.464787 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1f34-account-create-zgm8c" event={"ID":"9bb311ca-6585-4cc6-8b68-23755a207433","Type":"ContainerDied","Data":"76c44a9cd16b2dced1c1b52ca056604a3d01d8ef6125fd8c430e831b91400e2c"} Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.464822 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1f34-account-create-zgm8c" Nov 25 13:01:08 crc kubenswrapper[4688]: I1125 13:01:08.464827 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76c44a9cd16b2dced1c1b52ca056604a3d01d8ef6125fd8c430e831b91400e2c" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.338335 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-pl44z"] Nov 25 13:01:09 crc kubenswrapper[4688]: E1125 13:01:09.339941 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerName="registry-server" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.340037 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerName="registry-server" Nov 25 13:01:09 crc kubenswrapper[4688]: E1125 13:01:09.340106 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb311ca-6585-4cc6-8b68-23755a207433" containerName="mariadb-account-create" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.340168 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb311ca-6585-4cc6-8b68-23755a207433" containerName="mariadb-account-create" Nov 25 13:01:09 crc kubenswrapper[4688]: E1125 13:01:09.340248 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerName="extract-utilities" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.340300 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerName="extract-utilities" Nov 25 13:01:09 crc kubenswrapper[4688]: E1125 13:01:09.340364 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerName="extract-content" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.340420 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerName="extract-content" Nov 25 13:01:09 crc kubenswrapper[4688]: E1125 13:01:09.340483 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61628bc-7d4e-4548-9770-6f764e6b89d7" containerName="mariadb-database-create" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.340571 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61628bc-7d4e-4548-9770-6f764e6b89d7" containerName="mariadb-database-create" Nov 25 13:01:09 crc kubenswrapper[4688]: E1125 13:01:09.340662 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a1b93f-b907-499e-b150-f2627f93b4b2" containerName="keystone-cron" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.340730 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a1b93f-b907-499e-b150-f2627f93b4b2" containerName="keystone-cron" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.340956 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd818eac-cbc6-445b-8c7a-5f9c39c65057" containerName="registry-server" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.341051 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a1b93f-b907-499e-b150-f2627f93b4b2" containerName="keystone-cron" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.341122 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb311ca-6585-4cc6-8b68-23755a207433" containerName="mariadb-account-create" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.341190 4688 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d61628bc-7d4e-4548-9770-6f764e6b89d7" containerName="mariadb-database-create" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.341987 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.344914 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tn6ss" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.344995 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.345099 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.347868 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.349515 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-scripts\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.349597 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-combined-ca-bundle\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.349627 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q67tt\" (UniqueName: \"kubernetes.io/projected/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-kube-api-access-q67tt\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.349717 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-config-data\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.360727 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pl44z"] Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.453117 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-config-data\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.453281 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-scripts\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.453311 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-combined-ca-bundle\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.453334 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q67tt\" (UniqueName: \"kubernetes.io/projected/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-kube-api-access-q67tt\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.458798 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-combined-ca-bundle\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.458981 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-config-data\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.459939 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-scripts\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.471493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q67tt\" (UniqueName: \"kubernetes.io/projected/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-kube-api-access-q67tt\") pod \"aodh-db-sync-pl44z\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:09 crc kubenswrapper[4688]: I1125 13:01:09.708304 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:10 crc kubenswrapper[4688]: I1125 13:01:10.192048 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pl44z"] Nov 25 13:01:10 crc kubenswrapper[4688]: I1125 13:01:10.489367 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pl44z" event={"ID":"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd","Type":"ContainerStarted","Data":"e39388f118974bdd89cdf0bd1edcfa2137b3b4fa681a4a5d4c108be6800e0136"} Nov 25 13:01:13 crc kubenswrapper[4688]: I1125 13:01:13.909408 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 13:01:14 crc kubenswrapper[4688]: I1125 13:01:14.530318 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pl44z" event={"ID":"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd","Type":"ContainerStarted","Data":"cd962b477a3b6dac04821d20211041570f585dbbef34a00cb719e68a1b8a8320"} Nov 25 13:01:14 crc kubenswrapper[4688]: I1125 13:01:14.549001 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-pl44z" podStartSLOduration=1.822371602 podStartE2EDuration="5.548982174s" podCreationTimestamp="2025-11-25 13:01:09 +0000 UTC" firstStartedPulling="2025-11-25 13:01:10.179560216 +0000 UTC m=+2820.289189084" lastFinishedPulling="2025-11-25 13:01:13.906170798 +0000 UTC m=+2824.015799656" observedRunningTime="2025-11-25 13:01:14.547316079 +0000 UTC m=+2824.656944947" watchObservedRunningTime="2025-11-25 13:01:14.548982174 +0000 UTC m=+2824.658611052" Nov 25 13:01:16 crc kubenswrapper[4688]: I1125 13:01:16.556746 4688 generic.go:334] "Generic (PLEG): container finished" podID="5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" containerID="cd962b477a3b6dac04821d20211041570f585dbbef34a00cb719e68a1b8a8320" exitCode=0 Nov 25 13:01:16 crc kubenswrapper[4688]: I1125 13:01:16.556808 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pl44z" event={"ID":"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd","Type":"ContainerDied","Data":"cd962b477a3b6dac04821d20211041570f585dbbef34a00cb719e68a1b8a8320"} Nov 25 13:01:17 crc kubenswrapper[4688]: I1125 13:01:17.856874 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:01:17 crc kubenswrapper[4688]: I1125 13:01:17.857273 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:01:17 crc kubenswrapper[4688]: I1125 13:01:17.960191 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.125736 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-combined-ca-bundle\") pod \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.125798 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q67tt\" (UniqueName: \"kubernetes.io/projected/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-kube-api-access-q67tt\") pod \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.125887 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-config-data\") pod \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.126036 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-scripts\") pod \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\" (UID: \"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd\") " Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.133289 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-kube-api-access-q67tt" (OuterVolumeSpecName: "kube-api-access-q67tt") pod "5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" (UID: "5c25d128-ae77-4b24-a447-c1a3ecd0d9bd"). InnerVolumeSpecName "kube-api-access-q67tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.133951 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-scripts" (OuterVolumeSpecName: "scripts") pod "5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" (UID: "5c25d128-ae77-4b24-a447-c1a3ecd0d9bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.170120 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" (UID: "5c25d128-ae77-4b24-a447-c1a3ecd0d9bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.171776 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-config-data" (OuterVolumeSpecName: "config-data") pod "5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" (UID: "5c25d128-ae77-4b24-a447-c1a3ecd0d9bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.228699 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.229006 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.229188 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q67tt\" (UniqueName: \"kubernetes.io/projected/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-kube-api-access-q67tt\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.229320 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.578444 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pl44z" event={"ID":"5c25d128-ae77-4b24-a447-c1a3ecd0d9bd","Type":"ContainerDied","Data":"e39388f118974bdd89cdf0bd1edcfa2137b3b4fa681a4a5d4c108be6800e0136"} Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.578480 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e39388f118974bdd89cdf0bd1edcfa2137b3b4fa681a4a5d4c108be6800e0136" Nov 25 13:01:18 crc kubenswrapper[4688]: I1125 13:01:18.578837 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pl44z" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.443430 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:19 crc kubenswrapper[4688]: E1125 13:01:19.444316 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" containerName="aodh-db-sync" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.444333 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" containerName="aodh-db-sync" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.444580 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" containerName="aodh-db-sync" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.448704 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.450919 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tn6ss" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.453445 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.455312 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.460359 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.553451 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-scripts\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.553561 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-config-data\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.553623 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.553677 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhgb5\" (UniqueName: \"kubernetes.io/projected/f8c4bfb9-b922-4796-84d3-7790bb92901b-kube-api-access-xhgb5\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.655370 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-scripts\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.655424 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-config-data\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.655478 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.655518 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhgb5\" (UniqueName: \"kubernetes.io/projected/f8c4bfb9-b922-4796-84d3-7790bb92901b-kube-api-access-xhgb5\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: 
I1125 13:01:19.659225 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.674141 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-scripts\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.674429 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-config-data\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.677352 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhgb5\" (UniqueName: \"kubernetes.io/projected/f8c4bfb9-b922-4796-84d3-7790bb92901b-kube-api-access-xhgb5\") pod \"aodh-0\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " pod="openstack/aodh-0" Nov 25 13:01:19 crc kubenswrapper[4688]: I1125 13:01:19.766142 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:01:20 crc kubenswrapper[4688]: I1125 13:01:20.266080 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:20 crc kubenswrapper[4688]: W1125 13:01:20.266795 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8c4bfb9_b922_4796_84d3_7790bb92901b.slice/crio-bcb87ba54db2be4c53fb87f2aa531dbb29fc84c0f9a62714ef5837b7c8ba033e WatchSource:0}: Error finding container bcb87ba54db2be4c53fb87f2aa531dbb29fc84c0f9a62714ef5837b7c8ba033e: Status 404 returned error can't find the container with id bcb87ba54db2be4c53fb87f2aa531dbb29fc84c0f9a62714ef5837b7c8ba033e Nov 25 13:01:20 crc kubenswrapper[4688]: I1125 13:01:20.597304 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerStarted","Data":"bcb87ba54db2be4c53fb87f2aa531dbb29fc84c0f9a62714ef5837b7c8ba033e"} Nov 25 13:01:21 crc kubenswrapper[4688]: I1125 13:01:21.605542 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerStarted","Data":"b20ecbe010f123232184433aa0f3ed141020683ad61978aab3360a0cbcb038a8"} Nov 25 13:01:21 crc kubenswrapper[4688]: I1125 13:01:21.712827 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:21 crc kubenswrapper[4688]: I1125 13:01:21.713303 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="ceilometer-central-agent" containerID="cri-o://2a5b2c6fb017b90db99af962c030313a2325be2e77a490f556b03742e615c82b" gracePeriod=30 Nov 25 13:01:21 crc kubenswrapper[4688]: I1125 13:01:21.713407 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="ceilometer-notification-agent" 
containerID="cri-o://30ecb35dcc075763385580b26358f737ba35793620b8275036ec25c1295e5f63" gracePeriod=30 Nov 25 13:01:21 crc kubenswrapper[4688]: I1125 13:01:21.713355 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="sg-core" containerID="cri-o://5adf4b165b15721c18c6595f30305bb4ae7d04310ef875e28f0c48f90d0a89ea" gracePeriod=30 Nov 25 13:01:21 crc kubenswrapper[4688]: I1125 13:01:21.713366 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="proxy-httpd" containerID="cri-o://3825f0b8c07447b613011ba2b8d4e7847c7052cad963261b4dba73f89723a13e" gracePeriod=30 Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.530826 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.616537 4688 generic.go:334] "Generic (PLEG): container finished" podID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerID="3825f0b8c07447b613011ba2b8d4e7847c7052cad963261b4dba73f89723a13e" exitCode=0 Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.616569 4688 generic.go:334] "Generic (PLEG): container finished" podID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerID="5adf4b165b15721c18c6595f30305bb4ae7d04310ef875e28f0c48f90d0a89ea" exitCode=2 Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.616577 4688 generic.go:334] "Generic (PLEG): container finished" podID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerID="30ecb35dcc075763385580b26358f737ba35793620b8275036ec25c1295e5f63" exitCode=0 Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.616584 4688 generic.go:334] "Generic (PLEG): container finished" podID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerID="2a5b2c6fb017b90db99af962c030313a2325be2e77a490f556b03742e615c82b" exitCode=0 Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.616603 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerDied","Data":"3825f0b8c07447b613011ba2b8d4e7847c7052cad963261b4dba73f89723a13e"} Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.616629 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerDied","Data":"5adf4b165b15721c18c6595f30305bb4ae7d04310ef875e28f0c48f90d0a89ea"} Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.616639 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerDied","Data":"30ecb35dcc075763385580b26358f737ba35793620b8275036ec25c1295e5f63"} Nov 25 13:01:22 crc kubenswrapper[4688]: I1125 13:01:22.616650 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerDied","Data":"2a5b2c6fb017b90db99af962c030313a2325be2e77a490f556b03742e615c82b"} Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.132653 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.328015 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-ceilometer-tls-certs\") pod \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.328137 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdkht\" (UniqueName: \"kubernetes.io/projected/22a16017-15b1-40ff-89f3-d1ae2620d6f4-kube-api-access-qdkht\") pod \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.328223 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-scripts\") pod \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.328251 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-combined-ca-bundle\") pod \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.328313 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-log-httpd\") pod \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.328348 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-config-data\") pod \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.328414 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-run-httpd\") pod \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.328460 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-sg-core-conf-yaml\") pod \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\" (UID: \"22a16017-15b1-40ff-89f3-d1ae2620d6f4\") " Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.329918 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "22a16017-15b1-40ff-89f3-d1ae2620d6f4" (UID: "22a16017-15b1-40ff-89f3-d1ae2620d6f4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.330151 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "22a16017-15b1-40ff-89f3-d1ae2620d6f4" (UID: "22a16017-15b1-40ff-89f3-d1ae2620d6f4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.342724 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-scripts" (OuterVolumeSpecName: "scripts") pod "22a16017-15b1-40ff-89f3-d1ae2620d6f4" (UID: "22a16017-15b1-40ff-89f3-d1ae2620d6f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.342883 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a16017-15b1-40ff-89f3-d1ae2620d6f4-kube-api-access-qdkht" (OuterVolumeSpecName: "kube-api-access-qdkht") pod "22a16017-15b1-40ff-89f3-d1ae2620d6f4" (UID: "22a16017-15b1-40ff-89f3-d1ae2620d6f4"). InnerVolumeSpecName "kube-api-access-qdkht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.371648 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "22a16017-15b1-40ff-89f3-d1ae2620d6f4" (UID: "22a16017-15b1-40ff-89f3-d1ae2620d6f4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.407671 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "22a16017-15b1-40ff-89f3-d1ae2620d6f4" (UID: "22a16017-15b1-40ff-89f3-d1ae2620d6f4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.426890 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22a16017-15b1-40ff-89f3-d1ae2620d6f4" (UID: "22a16017-15b1-40ff-89f3-d1ae2620d6f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.432252 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.432291 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.432307 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.432318 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22a16017-15b1-40ff-89f3-d1ae2620d6f4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.432329 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.432338 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.432349 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdkht\" (UniqueName: \"kubernetes.io/projected/22a16017-15b1-40ff-89f3-d1ae2620d6f4-kube-api-access-qdkht\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.461748 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-config-data" (OuterVolumeSpecName: "config-data") pod "22a16017-15b1-40ff-89f3-d1ae2620d6f4" (UID: "22a16017-15b1-40ff-89f3-d1ae2620d6f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.534743 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22a16017-15b1-40ff-89f3-d1ae2620d6f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.628850 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22a16017-15b1-40ff-89f3-d1ae2620d6f4","Type":"ContainerDied","Data":"a6b8ba93044548658e941d96e41c477c5ef7a52fa2f036548fe68fb375d71652"} Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.628905 4688 scope.go:117] "RemoveContainer" containerID="3825f0b8c07447b613011ba2b8d4e7847c7052cad963261b4dba73f89723a13e" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.629042 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.649256 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerStarted","Data":"2e3560145c9137977db4245281aee12c51cf294927abae2e54788bf2673edba8"} Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.678249 4688 scope.go:117] "RemoveContainer" containerID="5adf4b165b15721c18c6595f30305bb4ae7d04310ef875e28f0c48f90d0a89ea" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.693262 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.713358 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.725748 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:23 crc kubenswrapper[4688]: E1125 13:01:23.726234 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="ceilometer-notification-agent" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.726254 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="ceilometer-notification-agent" Nov 25 13:01:23 crc kubenswrapper[4688]: E1125 13:01:23.726322 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="ceilometer-central-agent" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.726332 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="ceilometer-central-agent" Nov 25 13:01:23 crc kubenswrapper[4688]: E1125 13:01:23.726357 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="proxy-httpd" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.726366 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="proxy-httpd" Nov 25 13:01:23 crc kubenswrapper[4688]: E1125 13:01:23.726391 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="sg-core" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.726399 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="sg-core" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.726696 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="ceilometer-notification-agent" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.726718 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="sg-core" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.726743 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="ceilometer-central-agent" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.726760 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" containerName="proxy-httpd" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.728967 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.732210 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.732393 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.733381 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.741257 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.742547 4688 scope.go:117] "RemoveContainer" containerID="30ecb35dcc075763385580b26358f737ba35793620b8275036ec25c1295e5f63" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.756506 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:23 crc kubenswrapper[4688]: E1125 13:01:23.760813 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-zcfbx log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="3b6f804c-bd87-4f36-9554-405fda9c91b9" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.776780 4688 scope.go:117] "RemoveContainer" containerID="2a5b2c6fb017b90db99af962c030313a2325be2e77a490f556b03742e615c82b" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.844961 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.845081 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-run-httpd\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.845498 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-log-httpd\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.845610 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.845674 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-scripts\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.845714 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-config-data\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.845773 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfbx\" (UniqueName: \"kubernetes.io/projected/3b6f804c-bd87-4f36-9554-405fda9c91b9-kube-api-access-zcfbx\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.845856 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.948136 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-log-httpd\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.948252 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.948310 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-scripts\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.948350 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-config-data\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.948421 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfbx\" (UniqueName: \"kubernetes.io/projected/3b6f804c-bd87-4f36-9554-405fda9c91b9-kube-api-access-zcfbx\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.948509 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.948607 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " 
pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.948658 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-run-httpd\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.949346 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-run-httpd\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.949752 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-log-httpd\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.953987 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.954101 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.954766 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-config-data\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.955326 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-scripts\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.958119 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:23 crc kubenswrapper[4688]: I1125 13:01:23.976932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfbx\" (UniqueName: \"kubernetes.io/projected/3b6f804c-bd87-4f36-9554-405fda9c91b9-kube-api-access-zcfbx\") pod \"ceilometer-0\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " pod="openstack/ceilometer-0" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.663087 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerStarted","Data":"7780b2531c6d8893783c7e5297259c7e85a54e5dd24e914e7531007c03966e07"} Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.664757 4688 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.676382 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.750337 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22a16017-15b1-40ff-89f3-d1ae2620d6f4" path="/var/lib/kubelet/pods/22a16017-15b1-40ff-89f3-d1ae2620d6f4/volumes" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.870856 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-sg-core-conf-yaml\") pod \"3b6f804c-bd87-4f36-9554-405fda9c91b9\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.870937 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-log-httpd\") pod \"3b6f804c-bd87-4f36-9554-405fda9c91b9\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.871007 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-config-data\") pod \"3b6f804c-bd87-4f36-9554-405fda9c91b9\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.871033 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-run-httpd\") pod \"3b6f804c-bd87-4f36-9554-405fda9c91b9\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.871068 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-combined-ca-bundle\") pod \"3b6f804c-bd87-4f36-9554-405fda9c91b9\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.871104 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-scripts\") pod \"3b6f804c-bd87-4f36-9554-405fda9c91b9\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.871129 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-ceilometer-tls-certs\") pod \"3b6f804c-bd87-4f36-9554-405fda9c91b9\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.871225 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcfbx\" (UniqueName: \"kubernetes.io/projected/3b6f804c-bd87-4f36-9554-405fda9c91b9-kube-api-access-zcfbx\") pod \"3b6f804c-bd87-4f36-9554-405fda9c91b9\" (UID: \"3b6f804c-bd87-4f36-9554-405fda9c91b9\") " Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.872946 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"3b6f804c-bd87-4f36-9554-405fda9c91b9" (UID: "3b6f804c-bd87-4f36-9554-405fda9c91b9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.873002 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b6f804c-bd87-4f36-9554-405fda9c91b9" (UID: "3b6f804c-bd87-4f36-9554-405fda9c91b9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.879722 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-scripts" (OuterVolumeSpecName: "scripts") pod "3b6f804c-bd87-4f36-9554-405fda9c91b9" (UID: "3b6f804c-bd87-4f36-9554-405fda9c91b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.879879 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b6f804c-bd87-4f36-9554-405fda9c91b9" (UID: "3b6f804c-bd87-4f36-9554-405fda9c91b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.882664 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b6f804c-bd87-4f36-9554-405fda9c91b9" (UID: "3b6f804c-bd87-4f36-9554-405fda9c91b9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.885649 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-config-data" (OuterVolumeSpecName: "config-data") pod "3b6f804c-bd87-4f36-9554-405fda9c91b9" (UID: "3b6f804c-bd87-4f36-9554-405fda9c91b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.891833 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6f804c-bd87-4f36-9554-405fda9c91b9-kube-api-access-zcfbx" (OuterVolumeSpecName: "kube-api-access-zcfbx") pod "3b6f804c-bd87-4f36-9554-405fda9c91b9" (UID: "3b6f804c-bd87-4f36-9554-405fda9c91b9"). InnerVolumeSpecName "kube-api-access-zcfbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.906277 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3b6f804c-bd87-4f36-9554-405fda9c91b9" (UID: "3b6f804c-bd87-4f36-9554-405fda9c91b9"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.974786 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.974815 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.974825 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.974833 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b6f804c-bd87-4f36-9554-405fda9c91b9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.974845 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.974853 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.974861 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b6f804c-bd87-4f36-9554-405fda9c91b9-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:24 crc kubenswrapper[4688]: I1125 13:01:24.974870 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcfbx\" (UniqueName: \"kubernetes.io/projected/3b6f804c-bd87-4f36-9554-405fda9c91b9-kube-api-access-zcfbx\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.672917 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.734640 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.751794 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.767826 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.773786 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.777799 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.778412 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.778595 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.780972 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.791220 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-config-data\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.791278 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/519ebfe6-d6c7-4205-b0f6-23be8445474f-log-httpd\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.791321 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.791373 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/519ebfe6-d6c7-4205-b0f6-23be8445474f-run-httpd\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.791446 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.791479 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.791495 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-scripts\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.791558 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pjnv\" (UniqueName: 
\"kubernetes.io/projected/519ebfe6-d6c7-4205-b0f6-23be8445474f-kube-api-access-4pjnv\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.893393 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pjnv\" (UniqueName: \"kubernetes.io/projected/519ebfe6-d6c7-4205-b0f6-23be8445474f-kube-api-access-4pjnv\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.893489 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-config-data\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.893517 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/519ebfe6-d6c7-4205-b0f6-23be8445474f-log-httpd\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.893657 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.893703 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/519ebfe6-d6c7-4205-b0f6-23be8445474f-run-httpd\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.893742 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.893777 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.893792 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-scripts\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.894303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/519ebfe6-d6c7-4205-b0f6-23be8445474f-log-httpd\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.894309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/519ebfe6-d6c7-4205-b0f6-23be8445474f-run-httpd\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.897857 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-config-data\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.898034 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.898248 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-scripts\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.899732 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.901559 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/519ebfe6-d6c7-4205-b0f6-23be8445474f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:25 crc kubenswrapper[4688]: I1125 13:01:25.915863 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pjnv\" (UniqueName: \"kubernetes.io/projected/519ebfe6-d6c7-4205-b0f6-23be8445474f-kube-api-access-4pjnv\") pod \"ceilometer-0\" (UID: \"519ebfe6-d6c7-4205-b0f6-23be8445474f\") " pod="openstack/ceilometer-0" Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.016925 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.541619 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 13:01:26 crc kubenswrapper[4688]: W1125 13:01:26.551321 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod519ebfe6_d6c7_4205_b0f6_23be8445474f.slice/crio-6435c53bee8b3c7fbc55eb54573c7241dbe3b00f8d3e255f61a3e576cfa5377e WatchSource:0}: Error finding container 6435c53bee8b3c7fbc55eb54573c7241dbe3b00f8d3e255f61a3e576cfa5377e: Status 404 returned error can't find the container with id 6435c53bee8b3c7fbc55eb54573c7241dbe3b00f8d3e255f61a3e576cfa5377e Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.684165 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"519ebfe6-d6c7-4205-b0f6-23be8445474f","Type":"ContainerStarted","Data":"6435c53bee8b3c7fbc55eb54573c7241dbe3b00f8d3e255f61a3e576cfa5377e"} Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.686998 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerStarted","Data":"a6d60e9bbc2223ecc0d027342fcda697c8c9db7e42bb843383d67d95c5eecc77"} Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.687204 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-api" containerID="cri-o://b20ecbe010f123232184433aa0f3ed141020683ad61978aab3360a0cbcb038a8" gracePeriod=30 Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.687805 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-listener" containerID="cri-o://a6d60e9bbc2223ecc0d027342fcda697c8c9db7e42bb843383d67d95c5eecc77" gracePeriod=30 Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.687869 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-notifier" containerID="cri-o://7780b2531c6d8893783c7e5297259c7e85a54e5dd24e914e7531007c03966e07" gracePeriod=30 Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.687917 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-evaluator" containerID="cri-o://2e3560145c9137977db4245281aee12c51cf294927abae2e54788bf2673edba8" gracePeriod=30 Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.708895 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.221721534 podStartE2EDuration="7.708876533s" podCreationTimestamp="2025-11-25 13:01:19 +0000 UTC" firstStartedPulling="2025-11-25 13:01:20.273991577 +0000 UTC m=+2830.383620455" lastFinishedPulling="2025-11-25 13:01:25.761146586 +0000 UTC m=+2835.870775454" observedRunningTime="2025-11-25 13:01:26.705729869 +0000 UTC m=+2836.815358737" watchObservedRunningTime="2025-11-25 13:01:26.708876533 +0000 UTC m=+2836.818505391" Nov 25 13:01:26 crc kubenswrapper[4688]: I1125 13:01:26.753414 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b6f804c-bd87-4f36-9554-405fda9c91b9" path="/var/lib/kubelet/pods/3b6f804c-bd87-4f36-9554-405fda9c91b9/volumes" Nov 25 
13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.573417 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jpdkw"] Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.575678 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.627570 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-catalog-content\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.627713 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-utilities\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.627741 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5qg\" (UniqueName: \"kubernetes.io/projected/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-kube-api-access-7q5qg\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.633644 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jpdkw"] Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.716419 4688 generic.go:334] "Generic (PLEG): container finished" podID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerID="7780b2531c6d8893783c7e5297259c7e85a54e5dd24e914e7531007c03966e07" exitCode=0 Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.716473 4688 generic.go:334] "Generic (PLEG): container finished" podID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerID="2e3560145c9137977db4245281aee12c51cf294927abae2e54788bf2673edba8" exitCode=0 Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.716483 4688 generic.go:334] "Generic (PLEG): container finished" podID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerID="b20ecbe010f123232184433aa0f3ed141020683ad61978aab3360a0cbcb038a8" exitCode=0 Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.716507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerDied","Data":"7780b2531c6d8893783c7e5297259c7e85a54e5dd24e914e7531007c03966e07"} Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.716560 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerDied","Data":"2e3560145c9137977db4245281aee12c51cf294927abae2e54788bf2673edba8"} Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.716573 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerDied","Data":"b20ecbe010f123232184433aa0f3ed141020683ad61978aab3360a0cbcb038a8"} Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.729204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-catalog-content\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.729317 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-utilities\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.729336 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q5qg\" (UniqueName: \"kubernetes.io/projected/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-kube-api-access-7q5qg\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.729907 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-catalog-content\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.729987 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-utilities\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.754411 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q5qg\" (UniqueName: \"kubernetes.io/projected/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-kube-api-access-7q5qg\") pod \"redhat-operators-jpdkw\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:27 crc kubenswrapper[4688]: I1125 13:01:27.905194 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:28 crc kubenswrapper[4688]: I1125 13:01:28.484707 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jpdkw"] Nov 25 13:01:28 crc kubenswrapper[4688]: I1125 13:01:28.727332 4688 generic.go:334] "Generic (PLEG): container finished" podID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerID="f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295" exitCode=0 Nov 25 13:01:28 crc kubenswrapper[4688]: I1125 13:01:28.727410 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpdkw" event={"ID":"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1","Type":"ContainerDied","Data":"f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295"} Nov 25 13:01:28 crc kubenswrapper[4688]: I1125 13:01:28.727707 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpdkw" event={"ID":"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1","Type":"ContainerStarted","Data":"d7f4e09f226898bc907b29f1ac0a85267fac018f2717d1f7dc146de54fe49e18"} Nov 25 13:01:28 crc kubenswrapper[4688]: I1125 13:01:28.730222 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"519ebfe6-d6c7-4205-b0f6-23be8445474f","Type":"ContainerStarted","Data":"9185b708af56573296fa3287713d6414b7e24e821ecb0e19f810e1582fe31955"} Nov 25 13:01:28 crc kubenswrapper[4688]: I1125 13:01:28.730257 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"519ebfe6-d6c7-4205-b0f6-23be8445474f","Type":"ContainerStarted","Data":"e1e41cc0cdb224ad709ae7f1e63269a8fc352c9c2b2ce3596c82ff23cd19aa08"} Nov 25 13:01:29 crc kubenswrapper[4688]: I1125 13:01:29.742665 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpdkw" event={"ID":"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1","Type":"ContainerStarted","Data":"e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977"} Nov 25 13:01:29 crc kubenswrapper[4688]: I1125 13:01:29.746048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"519ebfe6-d6c7-4205-b0f6-23be8445474f","Type":"ContainerStarted","Data":"1243e03de390051a9574e055a758bd25e37e6c71bf47c45b80c1f62f34aeda74"} Nov 25 13:01:30 crc kubenswrapper[4688]: I1125 13:01:30.785398 4688 generic.go:334] "Generic (PLEG): container finished" podID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerID="e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977" exitCode=0 Nov 25 13:01:30 crc kubenswrapper[4688]: I1125 13:01:30.786320 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpdkw" event={"ID":"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1","Type":"ContainerDied","Data":"e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977"} Nov 25 13:01:31 crc kubenswrapper[4688]: I1125 13:01:31.796447 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpdkw" event={"ID":"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1","Type":"ContainerStarted","Data":"3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d"} Nov 25 13:01:31 crc kubenswrapper[4688]: I1125 13:01:31.799089 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"519ebfe6-d6c7-4205-b0f6-23be8445474f","Type":"ContainerStarted","Data":"a3a0273020ed42bb139ec44057db1d38833f0c6bde7777b18933658a4a02341a"} 
Nov 25 13:01:31 crc kubenswrapper[4688]: I1125 13:01:31.799297 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 13:01:31 crc kubenswrapper[4688]: I1125 13:01:31.821864 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jpdkw" podStartSLOduration=2.258106796 podStartE2EDuration="4.821840576s" podCreationTimestamp="2025-11-25 13:01:27 +0000 UTC" firstStartedPulling="2025-11-25 13:01:28.728927495 +0000 UTC m=+2838.838556363" lastFinishedPulling="2025-11-25 13:01:31.292661275 +0000 UTC m=+2841.402290143" observedRunningTime="2025-11-25 13:01:31.813207884 +0000 UTC m=+2841.922836772" watchObservedRunningTime="2025-11-25 13:01:31.821840576 +0000 UTC m=+2841.931469454" Nov 25 13:01:31 crc kubenswrapper[4688]: I1125 13:01:31.850809 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.836657346 podStartE2EDuration="6.850785662s" podCreationTimestamp="2025-11-25 13:01:25 +0000 UTC" firstStartedPulling="2025-11-25 13:01:26.553316645 +0000 UTC m=+2836.662945513" lastFinishedPulling="2025-11-25 13:01:30.567444961 +0000 UTC m=+2840.677073829" observedRunningTime="2025-11-25 13:01:31.839358135 +0000 UTC m=+2841.948987013" watchObservedRunningTime="2025-11-25 13:01:31.850785662 +0000 UTC m=+2841.960414540" Nov 25 13:01:37 crc kubenswrapper[4688]: I1125 13:01:37.906369 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:37 crc kubenswrapper[4688]: I1125 13:01:37.906967 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:37 crc kubenswrapper[4688]: I1125 13:01:37.952680 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:38 crc kubenswrapper[4688]: I1125 13:01:38.912341 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:38 crc kubenswrapper[4688]: I1125 13:01:38.969868 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jpdkw"] Nov 25 13:01:40 crc kubenswrapper[4688]: I1125 13:01:40.884341 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jpdkw" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerName="registry-server" containerID="cri-o://3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d" gracePeriod=2 Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.352833 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.504375 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-utilities\") pod \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.504508 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-catalog-content\") pod \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.504543 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q5qg\" (UniqueName: \"kubernetes.io/projected/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-kube-api-access-7q5qg\") pod \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\" (UID: \"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1\") " Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.505440 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-utilities" (OuterVolumeSpecName: "utilities") pod "afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" (UID: "afe2dd42-95b4-4b24-bc87-3d9a1f3163f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.515681 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-kube-api-access-7q5qg" (OuterVolumeSpecName: "kube-api-access-7q5qg") pod "afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" (UID: "afe2dd42-95b4-4b24-bc87-3d9a1f3163f1"). InnerVolumeSpecName "kube-api-access-7q5qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.607281 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" (UID: "afe2dd42-95b4-4b24-bc87-3d9a1f3163f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.607887 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.607956 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q5qg\" (UniqueName: \"kubernetes.io/projected/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-kube-api-access-7q5qg\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.607984 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.896096 4688 generic.go:334] "Generic (PLEG): container finished" podID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerID="3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d" exitCode=0 Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.896133 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpdkw" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.896174 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpdkw" event={"ID":"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1","Type":"ContainerDied","Data":"3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d"} Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.896586 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpdkw" event={"ID":"afe2dd42-95b4-4b24-bc87-3d9a1f3163f1","Type":"ContainerDied","Data":"d7f4e09f226898bc907b29f1ac0a85267fac018f2717d1f7dc146de54fe49e18"} Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.896607 4688 scope.go:117] "RemoveContainer" containerID="3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.939703 4688 scope.go:117] "RemoveContainer" containerID="e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977" Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.947575 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jpdkw"] Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.960566 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jpdkw"] Nov 25 13:01:41 crc kubenswrapper[4688]: I1125 13:01:41.968470 4688 scope.go:117] "RemoveContainer" containerID="f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295" Nov 25 13:01:42 crc kubenswrapper[4688]: I1125 13:01:42.017419 4688 scope.go:117] "RemoveContainer" containerID="3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d" Nov 25 13:01:42 crc kubenswrapper[4688]: E1125 13:01:42.018038 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d\": container with ID starting with 3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d not found: ID does not exist" containerID="3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d" Nov 25 13:01:42 crc kubenswrapper[4688]: I1125 13:01:42.018092 4688 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d"} err="failed to get container status \"3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d\": rpc error: code = NotFound desc = could not find container \"3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d\": container with ID starting with 3a0605761d4a1c407354ac2fb8807b0637056404d6a82615e63a888482af2c2d not found: ID does not exist" Nov 25 13:01:42 crc kubenswrapper[4688]: I1125 13:01:42.018127 4688 scope.go:117] "RemoveContainer" containerID="e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977" Nov 25 13:01:42 crc kubenswrapper[4688]: E1125 13:01:42.018471 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977\": container with ID starting with e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977 not found: ID does not exist" containerID="e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977" Nov 25 13:01:42 crc kubenswrapper[4688]: I1125 13:01:42.018691 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977"} err="failed to get container status \"e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977\": rpc error: code = NotFound desc = could not find container \"e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977\": container with ID starting with e467995a43d193fba17a6708443c609a6610d0c76aee5f33cc6158cf38716977 not found: ID does not exist" Nov 25 13:01:42 crc kubenswrapper[4688]: I1125 13:01:42.018734 4688 scope.go:117] "RemoveContainer" containerID="f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295" Nov 25 13:01:42 crc kubenswrapper[4688]: E1125 13:01:42.019041 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295\": container with ID starting with f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295 not found: ID does not exist" containerID="f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295" Nov 25 13:01:42 crc kubenswrapper[4688]: I1125 13:01:42.019071 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295"} err="failed to get container status \"f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295\": rpc error: code = NotFound desc = could not find container \"f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295\": container with ID starting with f89e37f17ee3684e5749cd6161577c9f8574eebc4f5dee92b33d7b07c71a9295 not found: ID does not exist" Nov 25 13:01:42 crc kubenswrapper[4688]: I1125 13:01:42.751006 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" path="/var/lib/kubelet/pods/afe2dd42-95b4-4b24-bc87-3d9a1f3163f1/volumes" Nov 25 13:01:47 crc kubenswrapper[4688]: I1125 13:01:47.853724 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:01:47 crc kubenswrapper[4688]: I1125 13:01:47.854371 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:01:56 crc kubenswrapper[4688]: I1125 13:01:56.032695 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.052433 4688 generic.go:334] "Generic (PLEG): container finished" podID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerID="a6d60e9bbc2223ecc0d027342fcda697c8c9db7e42bb843383d67d95c5eecc77" exitCode=137 Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.052823 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerDied","Data":"a6d60e9bbc2223ecc0d027342fcda697c8c9db7e42bb843383d67d95c5eecc77"} Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.233459 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.343490 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhgb5\" (UniqueName: \"kubernetes.io/projected/f8c4bfb9-b922-4796-84d3-7790bb92901b-kube-api-access-xhgb5\") pod \"f8c4bfb9-b922-4796-84d3-7790bb92901b\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.344708 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-config-data\") pod \"f8c4bfb9-b922-4796-84d3-7790bb92901b\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.344738 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-combined-ca-bundle\") pod \"f8c4bfb9-b922-4796-84d3-7790bb92901b\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.344806 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-scripts\") pod \"f8c4bfb9-b922-4796-84d3-7790bb92901b\" (UID: \"f8c4bfb9-b922-4796-84d3-7790bb92901b\") " Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.364040 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-scripts" (OuterVolumeSpecName: "scripts") pod "f8c4bfb9-b922-4796-84d3-7790bb92901b" (UID: "f8c4bfb9-b922-4796-84d3-7790bb92901b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.364085 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c4bfb9-b922-4796-84d3-7790bb92901b-kube-api-access-xhgb5" (OuterVolumeSpecName: "kube-api-access-xhgb5") pod "f8c4bfb9-b922-4796-84d3-7790bb92901b" (UID: "f8c4bfb9-b922-4796-84d3-7790bb92901b"). 
InnerVolumeSpecName "kube-api-access-xhgb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.449441 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhgb5\" (UniqueName: \"kubernetes.io/projected/f8c4bfb9-b922-4796-84d3-7790bb92901b-kube-api-access-xhgb5\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.449684 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.463281 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-config-data" (OuterVolumeSpecName: "config-data") pod "f8c4bfb9-b922-4796-84d3-7790bb92901b" (UID: "f8c4bfb9-b922-4796-84d3-7790bb92901b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.481880 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8c4bfb9-b922-4796-84d3-7790bb92901b" (UID: "f8c4bfb9-b922-4796-84d3-7790bb92901b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.551159 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:57 crc kubenswrapper[4688]: I1125 13:01:57.551193 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c4bfb9-b922-4796-84d3-7790bb92901b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.063660 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f8c4bfb9-b922-4796-84d3-7790bb92901b","Type":"ContainerDied","Data":"bcb87ba54db2be4c53fb87f2aa531dbb29fc84c0f9a62714ef5837b7c8ba033e"} Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.064288 4688 scope.go:117] "RemoveContainer" containerID="a6d60e9bbc2223ecc0d027342fcda697c8c9db7e42bb843383d67d95c5eecc77" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.063948 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.095991 4688 scope.go:117] "RemoveContainer" containerID="7780b2531c6d8893783c7e5297259c7e85a54e5dd24e914e7531007c03966e07" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.101224 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.113321 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.135442 4688 scope.go:117] "RemoveContainer" containerID="2e3560145c9137977db4245281aee12c51cf294927abae2e54788bf2673edba8" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.165836 4688 scope.go:117] "RemoveContainer" containerID="b20ecbe010f123232184433aa0f3ed141020683ad61978aab3360a0cbcb038a8" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.166447 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:58 crc kubenswrapper[4688]: E1125 13:01:58.167716 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerName="extract-utilities" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.167749 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerName="extract-utilities" Nov 25 13:01:58 crc kubenswrapper[4688]: E1125 13:01:58.167795 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-notifier" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.167804 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-notifier" Nov 25 13:01:58 crc kubenswrapper[4688]: E1125 13:01:58.167830 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-api" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.167842 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-api" Nov 25 13:01:58 crc kubenswrapper[4688]: E1125 13:01:58.167870 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerName="registry-server" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.167881 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerName="registry-server" Nov 25 13:01:58 crc kubenswrapper[4688]: E1125 13:01:58.167897 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-evaluator" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.167905 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-evaluator" Nov 25 13:01:58 crc kubenswrapper[4688]: E1125 13:01:58.167924 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerName="extract-content" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.167932 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerName="extract-content" Nov 25 13:01:58 crc kubenswrapper[4688]: E1125 13:01:58.167969 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" 
containerName="aodh-listener" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.167977 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-listener" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.168426 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-listener" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.168464 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-api" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.168495 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe2dd42-95b4-4b24-bc87-3d9a1f3163f1" containerName="registry-server" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.168535 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-notifier" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.168558 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" containerName="aodh-evaluator" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.171450 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.177129 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tn6ss" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.177318 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.177409 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.177517 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.177706 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.183847 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.263938 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-scripts\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.264302 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4nj\" (UniqueName: \"kubernetes.io/projected/ba2920a0-7f0c-464e-aa20-d29a32ed6850-kube-api-access-qf4nj\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.264335 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-config-data\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.264356 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-internal-tls-certs\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.264459 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.264561 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-public-tls-certs\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.366001 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.366077 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-public-tls-certs\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.366157 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-scripts\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.366184 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf4nj\" (UniqueName: \"kubernetes.io/projected/ba2920a0-7f0c-464e-aa20-d29a32ed6850-kube-api-access-qf4nj\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.366206 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-config-data\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.366224 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-internal-tls-certs\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.372008 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.372130 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-internal-tls-certs\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.372810 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-config-data\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.373630 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-public-tls-certs\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.380546 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-scripts\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.388639 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf4nj\" (UniqueName: \"kubernetes.io/projected/ba2920a0-7f0c-464e-aa20-d29a32ed6850-kube-api-access-qf4nj\") pod \"aodh-0\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.527408 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.755701 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c4bfb9-b922-4796-84d3-7790bb92901b" path="/var/lib/kubelet/pods/f8c4bfb9-b922-4796-84d3-7790bb92901b/volumes" Nov 25 13:01:58 crc kubenswrapper[4688]: I1125 13:01:58.985477 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 13:01:59 crc kubenswrapper[4688]: I1125 13:01:59.074343 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerStarted","Data":"c50c24bc241277613bb57062d84cab349173f0aeae20943f84ac95fae84d6027"} Nov 25 13:02:00 crc kubenswrapper[4688]: I1125 13:02:00.090173 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerStarted","Data":"a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e"} Nov 25 13:02:01 crc kubenswrapper[4688]: I1125 13:02:01.110147 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerStarted","Data":"aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135"} Nov 25 13:02:01 crc kubenswrapper[4688]: I1125 13:02:01.110705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerStarted","Data":"9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12"} Nov 25 13:02:02 crc kubenswrapper[4688]: I1125 13:02:02.121855 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerStarted","Data":"18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b"} Nov 25 13:02:18 crc kubenswrapper[4688]: I1125 13:02:18.301276 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:02:18 crc kubenswrapper[4688]: I1125 13:02:18.302033 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:02:18 crc kubenswrapper[4688]: I1125 13:02:18.302478 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 13:02:18 crc kubenswrapper[4688]: I1125 13:02:18.303186 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7f6010fb0df9c2e667539baa0196010cf78e5e21c863cb43d0f9c102bd523c5"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 13:02:18 crc kubenswrapper[4688]: I1125 13:02:18.303263 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://d7f6010fb0df9c2e667539baa0196010cf78e5e21c863cb43d0f9c102bd523c5" gracePeriod=600 Nov 25 13:02:19 crc kubenswrapper[4688]: I1125 13:02:19.364841 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="d7f6010fb0df9c2e667539baa0196010cf78e5e21c863cb43d0f9c102bd523c5" exitCode=0 Nov 25 13:02:19 crc kubenswrapper[4688]: I1125 13:02:19.364893 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"d7f6010fb0df9c2e667539baa0196010cf78e5e21c863cb43d0f9c102bd523c5"} Nov 25 13:02:19 crc kubenswrapper[4688]: I1125 13:02:19.366887 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"} Nov 25 13:02:19 crc kubenswrapper[4688]: I1125 13:02:19.366951 4688 scope.go:117] "RemoveContainer" containerID="0658ee834b58d735e7aea8109fbb4709ac9fe2e5ddb9a92c19a712d5e6050ca6" Nov 25 13:02:19 crc kubenswrapper[4688]: I1125 13:02:19.396740 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=18.701461102 podStartE2EDuration="21.396722531s" podCreationTimestamp="2025-11-25 13:01:58 +0000 UTC" firstStartedPulling="2025-11-25 13:01:58.992608812 +0000 UTC m=+2869.102237680" lastFinishedPulling="2025-11-25 13:02:01.687870231 +0000 UTC m=+2871.797499109" observedRunningTime="2025-11-25 13:02:02.159777762 +0000 UTC 
m=+2872.269406630" watchObservedRunningTime="2025-11-25 13:02:19.396722531 +0000 UTC m=+2889.506351399" Nov 25 13:03:04 crc kubenswrapper[4688]: I1125 13:03:04.046096 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/1.log" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.778097 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bsr5k"] Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.780570 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.793428 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsr5k"] Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.831737 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtdw\" (UniqueName: \"kubernetes.io/projected/2febb50e-8fb2-4379-88de-1cf775e3f12e-kube-api-access-6gtdw\") pod \"redhat-marketplace-bsr5k\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.831895 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-catalog-content\") pod \"redhat-marketplace-bsr5k\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.831924 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-utilities\") pod \"redhat-marketplace-bsr5k\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.853368 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.853417 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.934157 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-catalog-content\") pod \"redhat-marketplace-bsr5k\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.934204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-utilities\") pod \"redhat-marketplace-bsr5k\" 
(UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.934306 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gtdw\" (UniqueName: \"kubernetes.io/projected/2febb50e-8fb2-4379-88de-1cf775e3f12e-kube-api-access-6gtdw\") pod \"redhat-marketplace-bsr5k\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.934653 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-catalog-content\") pod \"redhat-marketplace-bsr5k\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.935046 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-utilities\") pod \"redhat-marketplace-bsr5k\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.955258 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gtdw\" (UniqueName: \"kubernetes.io/projected/2febb50e-8fb2-4379-88de-1cf775e3f12e-kube-api-access-6gtdw\") pod \"redhat-marketplace-bsr5k\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") " pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.969592 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qjqw6"] Nov 25 13:04:47 crc kubenswrapper[4688]: I1125 13:04:47.979893 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.000078 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjqw6"] Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.035769 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-catalog-content\") pod \"certified-operators-qjqw6\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.035813 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-utilities\") pod \"certified-operators-qjqw6\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.035903 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kckm\" (UniqueName: \"kubernetes.io/projected/90944524-30da-40f1-b8e6-e96a84862d72-kube-api-access-2kckm\") pod \"certified-operators-qjqw6\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.112172 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.137372 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kckm\" (UniqueName: \"kubernetes.io/projected/90944524-30da-40f1-b8e6-e96a84862d72-kube-api-access-2kckm\") pod \"certified-operators-qjqw6\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.137518 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-catalog-content\") pod \"certified-operators-qjqw6\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.137565 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-utilities\") pod \"certified-operators-qjqw6\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.138142 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-catalog-content\") pod \"certified-operators-qjqw6\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.141753 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-utilities\") pod \"certified-operators-qjqw6\" (UID: 
\"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:48 crc kubenswrapper[4688]: I1125 13:04:48.157975 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kckm\" (UniqueName: \"kubernetes.io/projected/90944524-30da-40f1-b8e6-e96a84862d72-kube-api-access-2kckm\") pod \"certified-operators-qjqw6\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") " pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:50 crc kubenswrapper[4688]: I1125 13:04:48.317959 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:50 crc kubenswrapper[4688]: W1125 13:04:50.946865 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2febb50e_8fb2_4379_88de_1cf775e3f12e.slice/crio-be5f71ddd922491ebddb1956532c71296c93983b4a690256b7f796a8ccf3d444 WatchSource:0}: Error finding container be5f71ddd922491ebddb1956532c71296c93983b4a690256b7f796a8ccf3d444: Status 404 returned error can't find the container with id be5f71ddd922491ebddb1956532c71296c93983b4a690256b7f796a8ccf3d444 Nov 25 13:04:50 crc kubenswrapper[4688]: I1125 13:04:50.947308 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsr5k"] Nov 25 13:04:50 crc kubenswrapper[4688]: I1125 13:04:50.977105 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsr5k" event={"ID":"2febb50e-8fb2-4379-88de-1cf775e3f12e","Type":"ContainerStarted","Data":"be5f71ddd922491ebddb1956532c71296c93983b4a690256b7f796a8ccf3d444"} Nov 25 13:04:51 crc kubenswrapper[4688]: I1125 13:04:51.014696 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjqw6"] Nov 25 13:04:51 crc kubenswrapper[4688]: W1125 13:04:51.023743 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90944524_30da_40f1_b8e6_e96a84862d72.slice/crio-31da875b209c10c87899395fbf27de8932a0d0eac20637f7772db52876950ddd WatchSource:0}: Error finding container 31da875b209c10c87899395fbf27de8932a0d0eac20637f7772db52876950ddd: Status 404 returned error can't find the container with id 31da875b209c10c87899395fbf27de8932a0d0eac20637f7772db52876950ddd Nov 25 13:04:51 crc kubenswrapper[4688]: I1125 13:04:51.986352 4688 generic.go:334] "Generic (PLEG): container finished" podID="90944524-30da-40f1-b8e6-e96a84862d72" containerID="55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb" exitCode=0 Nov 25 13:04:51 crc kubenswrapper[4688]: I1125 13:04:51.986419 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqw6" event={"ID":"90944524-30da-40f1-b8e6-e96a84862d72","Type":"ContainerDied","Data":"55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb"} Nov 25 13:04:51 crc kubenswrapper[4688]: I1125 13:04:51.986835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqw6" event={"ID":"90944524-30da-40f1-b8e6-e96a84862d72","Type":"ContainerStarted","Data":"31da875b209c10c87899395fbf27de8932a0d0eac20637f7772db52876950ddd"} Nov 25 13:04:51 crc kubenswrapper[4688]: I1125 13:04:51.989737 4688 generic.go:334] "Generic (PLEG): container finished" podID="2febb50e-8fb2-4379-88de-1cf775e3f12e" 
containerID="de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a" exitCode=0 Nov 25 13:04:51 crc kubenswrapper[4688]: I1125 13:04:51.989775 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsr5k" event={"ID":"2febb50e-8fb2-4379-88de-1cf775e3f12e","Type":"ContainerDied","Data":"de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a"} Nov 25 13:04:54 crc kubenswrapper[4688]: I1125 13:04:54.012417 4688 generic.go:334] "Generic (PLEG): container finished" podID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerID="246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0" exitCode=0 Nov 25 13:04:54 crc kubenswrapper[4688]: I1125 13:04:54.012479 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsr5k" event={"ID":"2febb50e-8fb2-4379-88de-1cf775e3f12e","Type":"ContainerDied","Data":"246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0"} Nov 25 13:04:54 crc kubenswrapper[4688]: I1125 13:04:54.015506 4688 generic.go:334] "Generic (PLEG): container finished" podID="90944524-30da-40f1-b8e6-e96a84862d72" containerID="aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003" exitCode=0 Nov 25 13:04:54 crc kubenswrapper[4688]: I1125 13:04:54.015578 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqw6" event={"ID":"90944524-30da-40f1-b8e6-e96a84862d72","Type":"ContainerDied","Data":"aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003"} Nov 25 13:04:56 crc kubenswrapper[4688]: I1125 13:04:56.046904 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqw6" event={"ID":"90944524-30da-40f1-b8e6-e96a84862d72","Type":"ContainerStarted","Data":"0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4"} Nov 25 13:04:56 crc kubenswrapper[4688]: I1125 13:04:56.052645 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsr5k" event={"ID":"2febb50e-8fb2-4379-88de-1cf775e3f12e","Type":"ContainerStarted","Data":"ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8"} Nov 25 13:04:56 crc kubenswrapper[4688]: I1125 13:04:56.078294 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qjqw6" podStartSLOduration=6.436140362 podStartE2EDuration="9.078274475s" podCreationTimestamp="2025-11-25 13:04:47 +0000 UTC" firstStartedPulling="2025-11-25 13:04:51.988778952 +0000 UTC m=+3042.098407820" lastFinishedPulling="2025-11-25 13:04:54.630913065 +0000 UTC m=+3044.740541933" observedRunningTime="2025-11-25 13:04:56.068550668 +0000 UTC m=+3046.178179546" watchObservedRunningTime="2025-11-25 13:04:56.078274475 +0000 UTC m=+3046.187903343" Nov 25 13:04:56 crc kubenswrapper[4688]: I1125 13:04:56.096358 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bsr5k" podStartSLOduration=6.438912756 podStartE2EDuration="9.096341619s" podCreationTimestamp="2025-11-25 13:04:47 +0000 UTC" firstStartedPulling="2025-11-25 13:04:51.991274459 +0000 UTC m=+3042.100903327" lastFinishedPulling="2025-11-25 13:04:54.648703322 +0000 UTC m=+3044.758332190" observedRunningTime="2025-11-25 13:04:56.088171245 +0000 UTC m=+3046.197800113" watchObservedRunningTime="2025-11-25 13:04:56.096341619 +0000 UTC m=+3046.205970487" Nov 25 13:04:58 crc kubenswrapper[4688]: I1125 13:04:58.112900 4688 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:58 crc kubenswrapper[4688]: I1125 13:04:58.113278 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:58 crc kubenswrapper[4688]: I1125 13:04:58.187303 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:04:58 crc kubenswrapper[4688]: I1125 13:04:58.319149 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:58 crc kubenswrapper[4688]: I1125 13:04:58.319396 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:04:58 crc kubenswrapper[4688]: I1125 13:04:58.361327 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:05:04 crc kubenswrapper[4688]: I1125 13:05:04.837332 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/2.log" Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.171085 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bsr5k" Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.218782 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsr5k"] Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.219051 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bsr5k" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerName="registry-server" containerID="cri-o://ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8" gracePeriod=2 Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.371895 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qjqw6" Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.731148 4688 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.826120 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-catalog-content\") pod \"2febb50e-8fb2-4379-88de-1cf775e3f12e\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") "
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.826243 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-utilities\") pod \"2febb50e-8fb2-4379-88de-1cf775e3f12e\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") "
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.826272 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gtdw\" (UniqueName: \"kubernetes.io/projected/2febb50e-8fb2-4379-88de-1cf775e3f12e-kube-api-access-6gtdw\") pod \"2febb50e-8fb2-4379-88de-1cf775e3f12e\" (UID: \"2febb50e-8fb2-4379-88de-1cf775e3f12e\") "
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.827184 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-utilities" (OuterVolumeSpecName: "utilities") pod "2febb50e-8fb2-4379-88de-1cf775e3f12e" (UID: "2febb50e-8fb2-4379-88de-1cf775e3f12e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.828921 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.836937 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2febb50e-8fb2-4379-88de-1cf775e3f12e-kube-api-access-6gtdw" (OuterVolumeSpecName: "kube-api-access-6gtdw") pod "2febb50e-8fb2-4379-88de-1cf775e3f12e" (UID: "2febb50e-8fb2-4379-88de-1cf775e3f12e"). InnerVolumeSpecName "kube-api-access-6gtdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.843098 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2febb50e-8fb2-4379-88de-1cf775e3f12e" (UID: "2febb50e-8fb2-4379-88de-1cf775e3f12e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.930259 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2febb50e-8fb2-4379-88de-1cf775e3f12e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:08 crc kubenswrapper[4688]: I1125 13:05:08.930297 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gtdw\" (UniqueName: \"kubernetes.io/projected/2febb50e-8fb2-4379-88de-1cf775e3f12e-kube-api-access-6gtdw\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.196578 4688 generic.go:334] "Generic (PLEG): container finished" podID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerID="ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8" exitCode=0
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.196623 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsr5k" event={"ID":"2febb50e-8fb2-4379-88de-1cf775e3f12e","Type":"ContainerDied","Data":"ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8"}
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.196655 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsr5k" event={"ID":"2febb50e-8fb2-4379-88de-1cf775e3f12e","Type":"ContainerDied","Data":"be5f71ddd922491ebddb1956532c71296c93983b4a690256b7f796a8ccf3d444"}
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.196688 4688 scope.go:117] "RemoveContainer" containerID="ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.197658 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsr5k"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.228959 4688 scope.go:117] "RemoveContainer" containerID="246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.248200 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsr5k"]
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.258124 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsr5k"]
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.264692 4688 scope.go:117] "RemoveContainer" containerID="de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.310957 4688 scope.go:117] "RemoveContainer" containerID="ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8"
Nov 25 13:05:09 crc kubenswrapper[4688]: E1125 13:05:09.311479 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8\": container with ID starting with ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8 not found: ID does not exist" containerID="ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.311536 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8"} err="failed to get container status \"ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8\": rpc error: code = NotFound desc = could not find container \"ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8\": container with ID starting with ba69d5ebdf196cd2be6ee0466de2de97073ec2576a6e04c98840e017847d99c8 not found: ID does not exist"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.311565 4688 scope.go:117] "RemoveContainer" containerID="246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0"
Nov 25 13:05:09 crc kubenswrapper[4688]: E1125 13:05:09.311866 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0\": container with ID starting with 246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0 not found: ID does not exist" containerID="246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.311913 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0"} err="failed to get container status \"246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0\": rpc error: code = NotFound desc = could not find container \"246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0\": container with ID starting with 246f2d0818d59876d660f05b7dce3c4585eeb7fb027c8cbd319d73ebd03119b0 not found: ID does not exist"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.311931 4688 scope.go:117] "RemoveContainer" containerID="de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a"
Nov 25 13:05:09 crc kubenswrapper[4688]: E1125 13:05:09.312267 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a\": container with ID starting with de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a not found: ID does not exist" containerID="de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a"
Nov 25 13:05:09 crc kubenswrapper[4688]: I1125 13:05:09.312294 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a"} err="failed to get container status \"de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a\": rpc error: code = NotFound desc = could not find container \"de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a\": container with ID starting with de3bc2949230e96edc5fe945577959292a3a4bfa12f607e9433724eb3635f53a not found: ID does not exist"
Nov 25 13:05:10 crc kubenswrapper[4688]: I1125 13:05:10.611758 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjqw6"]
Nov 25 13:05:10 crc kubenswrapper[4688]: I1125 13:05:10.611983 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qjqw6" podUID="90944524-30da-40f1-b8e6-e96a84862d72" containerName="registry-server" containerID="cri-o://0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4" gracePeriod=2
Nov 25 13:05:10 crc kubenswrapper[4688]: I1125 13:05:10.751759 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" path="/var/lib/kubelet/pods/2febb50e-8fb2-4379-88de-1cf775e3f12e/volumes"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.132406 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjqw6"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.220975 4688 generic.go:334] "Generic (PLEG): container finished" podID="90944524-30da-40f1-b8e6-e96a84862d72" containerID="0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4" exitCode=0
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.221034 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqw6" event={"ID":"90944524-30da-40f1-b8e6-e96a84862d72","Type":"ContainerDied","Data":"0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4"}
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.221062 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjqw6"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.221100 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjqw6" event={"ID":"90944524-30da-40f1-b8e6-e96a84862d72","Type":"ContainerDied","Data":"31da875b209c10c87899395fbf27de8932a0d0eac20637f7772db52876950ddd"}
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.221121 4688 scope.go:117] "RemoveContainer" containerID="0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.252645 4688 scope.go:117] "RemoveContainer" containerID="aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.272095 4688 scope.go:117] "RemoveContainer" containerID="55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.277158 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-catalog-content\") pod \"90944524-30da-40f1-b8e6-e96a84862d72\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") "
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.277196 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-utilities\") pod \"90944524-30da-40f1-b8e6-e96a84862d72\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") "
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.277255 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kckm\" (UniqueName: \"kubernetes.io/projected/90944524-30da-40f1-b8e6-e96a84862d72-kube-api-access-2kckm\") pod \"90944524-30da-40f1-b8e6-e96a84862d72\" (UID: \"90944524-30da-40f1-b8e6-e96a84862d72\") "
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.278621 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-utilities" (OuterVolumeSpecName: "utilities") pod "90944524-30da-40f1-b8e6-e96a84862d72" (UID: "90944524-30da-40f1-b8e6-e96a84862d72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.278975 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.283858 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90944524-30da-40f1-b8e6-e96a84862d72-kube-api-access-2kckm" (OuterVolumeSpecName: "kube-api-access-2kckm") pod "90944524-30da-40f1-b8e6-e96a84862d72" (UID: "90944524-30da-40f1-b8e6-e96a84862d72"). InnerVolumeSpecName "kube-api-access-2kckm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.324980 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90944524-30da-40f1-b8e6-e96a84862d72" (UID: "90944524-30da-40f1-b8e6-e96a84862d72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.358443 4688 scope.go:117] "RemoveContainer" containerID="0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4"
Nov 25 13:05:11 crc kubenswrapper[4688]: E1125 13:05:11.359466 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4\": container with ID starting with 0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4 not found: ID does not exist" containerID="0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.359617 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4"} err="failed to get container status \"0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4\": rpc error: code = NotFound desc = could not find container \"0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4\": container with ID starting with 0f850b44b67d9164b3dd456fdac65bf2076691a0903e086ab6c569047e8a94b4 not found: ID does not exist"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.359662 4688 scope.go:117] "RemoveContainer" containerID="aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003"
Nov 25 13:05:11 crc kubenswrapper[4688]: E1125 13:05:11.360064 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003\": container with ID starting with aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003 not found: ID does not exist" containerID="aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.360103 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003"} err="failed to get container status \"aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003\": rpc error: code = NotFound desc = could not find container \"aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003\": container with ID starting with aad4a00edb323b5c24442d0ed9cd71da96818e5a215718a013baa82a8890f003 not found: ID does not exist"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.360130 4688 scope.go:117] "RemoveContainer" containerID="55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb"
Nov 25 13:05:11 crc kubenswrapper[4688]: E1125 13:05:11.360512 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb\": container with ID starting with 55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb not found: ID does not exist" containerID="55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.360598 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb"} err="failed to get container status \"55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb\": rpc error: code = NotFound desc = could not find container \"55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb\": container with ID starting with 55a9ccd82eed056c36ba02b99751ce971f07fc76e16a88fa0190685023a494eb not found: ID does not exist"
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.380848 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90944524-30da-40f1-b8e6-e96a84862d72-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.380888 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kckm\" (UniqueName: \"kubernetes.io/projected/90944524-30da-40f1-b8e6-e96a84862d72-kube-api-access-2kckm\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.555288 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjqw6"]
Nov 25 13:05:11 crc kubenswrapper[4688]: I1125 13:05:11.564631 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qjqw6"]
Nov 25 13:05:12 crc kubenswrapper[4688]: I1125 13:05:12.751021 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90944524-30da-40f1-b8e6-e96a84862d72" path="/var/lib/kubelet/pods/90944524-30da-40f1-b8e6-e96a84862d72/volumes"
Nov 25 13:05:17 crc kubenswrapper[4688]: I1125 13:05:17.853724 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 13:05:17 crc kubenswrapper[4688]: I1125 13:05:17.854380 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.259966 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"]
Nov 25 13:05:19 crc kubenswrapper[4688]: E1125 13:05:19.260893 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90944524-30da-40f1-b8e6-e96a84862d72" containerName="extract-content"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.260911 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="90944524-30da-40f1-b8e6-e96a84862d72" containerName="extract-content"
Nov 25 13:05:19 crc kubenswrapper[4688]: E1125 13:05:19.260943 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerName="registry-server"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.260953 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerName="registry-server"
Nov 25 13:05:19 crc kubenswrapper[4688]: E1125 13:05:19.260981 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerName="extract-utilities"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.260990 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerName="extract-utilities"
Nov 25 13:05:19 crc kubenswrapper[4688]: E1125 13:05:19.261009 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90944524-30da-40f1-b8e6-e96a84862d72" containerName="extract-utilities"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.261023 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="90944524-30da-40f1-b8e6-e96a84862d72" containerName="extract-utilities"
Nov 25 13:05:19 crc kubenswrapper[4688]: E1125 13:05:19.261049 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerName="extract-content"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.261056 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerName="extract-content"
Nov 25 13:05:19 crc kubenswrapper[4688]: E1125 13:05:19.261091 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90944524-30da-40f1-b8e6-e96a84862d72" containerName="registry-server"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.261100 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="90944524-30da-40f1-b8e6-e96a84862d72" containerName="registry-server"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.261352 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="90944524-30da-40f1-b8e6-e96a84862d72" containerName="registry-server"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.261366 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2febb50e-8fb2-4379-88de-1cf775e3f12e" containerName="registry-server"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.263079 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.265951 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.274323 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"]
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.329327 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.329659 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.329702 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d6gb\" (UniqueName: \"kubernetes.io/projected/f264cf40-eeb7-48d8-93d5-af0c6953390e-kube-api-access-7d6gb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.431387 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.431565 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.431584 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d6gb\" (UniqueName: \"kubernetes.io/projected/f264cf40-eeb7-48d8-93d5-af0c6953390e-kube-api-access-7d6gb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.432006 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.432249 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.454194 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d6gb\" (UniqueName: \"kubernetes.io/projected/f264cf40-eeb7-48d8-93d5-af0c6953390e-kube-api-access-7d6gb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:19 crc kubenswrapper[4688]: I1125 13:05:19.583931 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:20 crc kubenswrapper[4688]: I1125 13:05:20.067859 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"]
Nov 25 13:05:20 crc kubenswrapper[4688]: I1125 13:05:20.312104 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw" event={"ID":"f264cf40-eeb7-48d8-93d5-af0c6953390e","Type":"ContainerStarted","Data":"01be6de8c7b99963cbd2ed380292204adc0d8285d5c1b547cc471d77ee7951ef"}
Nov 25 13:05:20 crc kubenswrapper[4688]: I1125 13:05:20.312418 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw" event={"ID":"f264cf40-eeb7-48d8-93d5-af0c6953390e","Type":"ContainerStarted","Data":"6a9ad2e2041ffcfe3147afb9c7e6bddd19f0feb3d8197bebb9fa7c215518d7fc"}
Nov 25 13:05:21 crc kubenswrapper[4688]: I1125 13:05:21.325883 4688 generic.go:334] "Generic (PLEG): container finished" podID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerID="01be6de8c7b99963cbd2ed380292204adc0d8285d5c1b547cc471d77ee7951ef" exitCode=0
Nov 25 13:05:21 crc kubenswrapper[4688]: I1125 13:05:21.325934 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw" event={"ID":"f264cf40-eeb7-48d8-93d5-af0c6953390e","Type":"ContainerDied","Data":"01be6de8c7b99963cbd2ed380292204adc0d8285d5c1b547cc471d77ee7951ef"}
Nov 25 13:05:23 crc kubenswrapper[4688]: I1125 13:05:23.348734 4688 generic.go:334] "Generic (PLEG): container finished" podID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerID="c794c94fdbeb299b1ed0a4a34aee940c6221759b2c1fe318bf9760d17b2ed3f2" exitCode=0
Nov 25 13:05:23 crc kubenswrapper[4688]: I1125 13:05:23.348827 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw" event={"ID":"f264cf40-eeb7-48d8-93d5-af0c6953390e","Type":"ContainerDied","Data":"c794c94fdbeb299b1ed0a4a34aee940c6221759b2c1fe318bf9760d17b2ed3f2"}
Nov 25 13:05:24 crc kubenswrapper[4688]: I1125 13:05:24.362476 4688 generic.go:334] "Generic (PLEG): container finished" podID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerID="766aa54e27d198bf12ccf3cf8342103b791bd47b750c561c9659f026d841266d" exitCode=0
Nov 25 13:05:24 crc kubenswrapper[4688]: I1125 13:05:24.362547 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw" event={"ID":"f264cf40-eeb7-48d8-93d5-af0c6953390e","Type":"ContainerDied","Data":"766aa54e27d198bf12ccf3cf8342103b791bd47b750c561c9659f026d841266d"}
Nov 25 13:05:25 crc kubenswrapper[4688]: I1125 13:05:25.753026 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:25 crc kubenswrapper[4688]: I1125 13:05:25.861011 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d6gb\" (UniqueName: \"kubernetes.io/projected/f264cf40-eeb7-48d8-93d5-af0c6953390e-kube-api-access-7d6gb\") pod \"f264cf40-eeb7-48d8-93d5-af0c6953390e\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") "
Nov 25 13:05:25 crc kubenswrapper[4688]: I1125 13:05:25.861203 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-util\") pod \"f264cf40-eeb7-48d8-93d5-af0c6953390e\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") "
Nov 25 13:05:25 crc kubenswrapper[4688]: I1125 13:05:25.861225 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-bundle\") pod \"f264cf40-eeb7-48d8-93d5-af0c6953390e\" (UID: \"f264cf40-eeb7-48d8-93d5-af0c6953390e\") "
Nov 25 13:05:25 crc kubenswrapper[4688]: I1125 13:05:25.865597 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-bundle" (OuterVolumeSpecName: "bundle") pod "f264cf40-eeb7-48d8-93d5-af0c6953390e" (UID: "f264cf40-eeb7-48d8-93d5-af0c6953390e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:05:25 crc kubenswrapper[4688]: I1125 13:05:25.874941 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f264cf40-eeb7-48d8-93d5-af0c6953390e-kube-api-access-7d6gb" (OuterVolumeSpecName: "kube-api-access-7d6gb") pod "f264cf40-eeb7-48d8-93d5-af0c6953390e" (UID: "f264cf40-eeb7-48d8-93d5-af0c6953390e"). InnerVolumeSpecName "kube-api-access-7d6gb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 13:05:25 crc kubenswrapper[4688]: I1125 13:05:25.963915 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:25 crc kubenswrapper[4688]: I1125 13:05:25.963946 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d6gb\" (UniqueName: \"kubernetes.io/projected/f264cf40-eeb7-48d8-93d5-af0c6953390e-kube-api-access-7d6gb\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:26 crc kubenswrapper[4688]: I1125 13:05:26.154307 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-util" (OuterVolumeSpecName: "util") pod "f264cf40-eeb7-48d8-93d5-af0c6953390e" (UID: "f264cf40-eeb7-48d8-93d5-af0c6953390e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:05:26 crc kubenswrapper[4688]: I1125 13:05:26.168020 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f264cf40-eeb7-48d8-93d5-af0c6953390e-util\") on node \"crc\" DevicePath \"\""
Nov 25 13:05:26 crc kubenswrapper[4688]: I1125 13:05:26.392097 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw" event={"ID":"f264cf40-eeb7-48d8-93d5-af0c6953390e","Type":"ContainerDied","Data":"6a9ad2e2041ffcfe3147afb9c7e6bddd19f0feb3d8197bebb9fa7c215518d7fc"}
Nov 25 13:05:26 crc kubenswrapper[4688]: I1125 13:05:26.392142 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9ad2e2041ffcfe3147afb9c7e6bddd19f0feb3d8197bebb9fa7c215518d7fc"
Nov 25 13:05:26 crc kubenswrapper[4688]: I1125 13:05:26.392449 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.910279 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5"]
Nov 25 13:05:37 crc kubenswrapper[4688]: E1125 13:05:37.911188 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerName="util"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.911203 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerName="util"
Nov 25 13:05:37 crc kubenswrapper[4688]: E1125 13:05:37.911219 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerName="extract"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.911225 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerName="extract"
Nov 25 13:05:37 crc kubenswrapper[4688]: E1125 13:05:37.911277 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerName="pull"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.911284 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerName="pull"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.911454 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f264cf40-eeb7-48d8-93d5-af0c6953390e" containerName="extract"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.912029 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.919280 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.919492 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-qt2n4"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.919699 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Nov 25 13:05:37 crc kubenswrapper[4688]: I1125 13:05:37.973387 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5"]
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.001388 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr4v\" (UniqueName: \"kubernetes.io/projected/48743801-c673-4010-931f-62cdb6ecaa61-kube-api-access-4gr4v\") pod \"obo-prometheus-operator-668cf9dfbb-9n7g5\" (UID: \"48743801-c673-4010-931f-62cdb6ecaa61\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.078580 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s"]
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.079976 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.090388 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qtjww"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.090599 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.095441 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk"]
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.097080 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.109079 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gr4v\" (UniqueName: \"kubernetes.io/projected/48743801-c673-4010-931f-62cdb6ecaa61-kube-api-access-4gr4v\") pod \"obo-prometheus-operator-668cf9dfbb-9n7g5\" (UID: \"48743801-c673-4010-931f-62cdb6ecaa61\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.119012 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s"]
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.132327 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk"]
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.150876 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gr4v\" (UniqueName: \"kubernetes.io/projected/48743801-c673-4010-931f-62cdb6ecaa61-kube-api-access-4gr4v\") pod \"obo-prometheus-operator-668cf9dfbb-9n7g5\" (UID: \"48743801-c673-4010-931f-62cdb6ecaa61\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.211292 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4d68b3c-cac2-4e12-ac0c-788a6e134a8a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk\" (UID: \"c4d68b3c-cac2-4e12-ac0c-788a6e134a8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.211770 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb4b1751-6bd8-418b-a987-7314862f08dc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s\" (UID: \"fb4b1751-6bd8-418b-a987-7314862f08dc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.211800 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb4b1751-6bd8-418b-a987-7314862f08dc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s\" (UID: \"fb4b1751-6bd8-418b-a987-7314862f08dc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.211824 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4d68b3c-cac2-4e12-ac0c-788a6e134a8a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk\" (UID: \"c4d68b3c-cac2-4e12-ac0c-788a6e134a8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.241291 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-fv2lb"]
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.242920 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.246730 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rjzmc"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.246936 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.259820 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.265070 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-fv2lb"]
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.313122 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgxnd\" (UniqueName: \"kubernetes.io/projected/603a75fc-30c7-4bcf-98ee-1b24c2c0c93c-kube-api-access-vgxnd\") pod \"observability-operator-d8bb48f5d-fv2lb\" (UID: \"603a75fc-30c7-4bcf-98ee-1b24c2c0c93c\") " pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.313197 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4d68b3c-cac2-4e12-ac0c-788a6e134a8a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk\" (UID: \"c4d68b3c-cac2-4e12-ac0c-788a6e134a8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.313224 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/603a75fc-30c7-4bcf-98ee-1b24c2c0c93c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-fv2lb\" (UID: \"603a75fc-30c7-4bcf-98ee-1b24c2c0c93c\") " pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.313372 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb4b1751-6bd8-418b-a987-7314862f08dc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s\" (UID: \"fb4b1751-6bd8-418b-a987-7314862f08dc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.313414 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb4b1751-6bd8-418b-a987-7314862f08dc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s\" (UID: \"fb4b1751-6bd8-418b-a987-7314862f08dc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s"
Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.313451 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4d68b3c-cac2-4e12-ac0c-788a6e134a8a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk\" (UID: \"c4d68b3c-cac2-4e12-ac0c-788a6e134a8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk"
pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.319447 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4d68b3c-cac2-4e12-ac0c-788a6e134a8a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk\" (UID: \"c4d68b3c-cac2-4e12-ac0c-788a6e134a8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.321291 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb4b1751-6bd8-418b-a987-7314862f08dc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s\" (UID: \"fb4b1751-6bd8-418b-a987-7314862f08dc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.326109 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4d68b3c-cac2-4e12-ac0c-788a6e134a8a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk\" (UID: \"c4d68b3c-cac2-4e12-ac0c-788a6e134a8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.326204 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb4b1751-6bd8-418b-a987-7314862f08dc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s\" (UID: \"fb4b1751-6bd8-418b-a987-7314862f08dc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.381195 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-md9tl"] Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.382609 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.391168 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-5wfs6" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.400654 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-md9tl"] Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.415249 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgxnd\" (UniqueName: \"kubernetes.io/projected/603a75fc-30c7-4bcf-98ee-1b24c2c0c93c-kube-api-access-vgxnd\") pod \"observability-operator-d8bb48f5d-fv2lb\" (UID: \"603a75fc-30c7-4bcf-98ee-1b24c2c0c93c\") " pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.415301 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/603a75fc-30c7-4bcf-98ee-1b24c2c0c93c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-fv2lb\" (UID: \"603a75fc-30c7-4bcf-98ee-1b24c2c0c93c\") " pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.421861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/603a75fc-30c7-4bcf-98ee-1b24c2c0c93c-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-fv2lb\" (UID: \"603a75fc-30c7-4bcf-98ee-1b24c2c0c93c\") " pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.434079 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.448554 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgxnd\" (UniqueName: \"kubernetes.io/projected/603a75fc-30c7-4bcf-98ee-1b24c2c0c93c-kube-api-access-vgxnd\") pod \"observability-operator-d8bb48f5d-fv2lb\" (UID: \"603a75fc-30c7-4bcf-98ee-1b24c2c0c93c\") " pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.508104 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.517016 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7zt\" (UniqueName: \"kubernetes.io/projected/6995a349-20f2-40e7-a7f9-0ee6c8535bd1-kube-api-access-pv7zt\") pod \"perses-operator-5446b9c989-md9tl\" (UID: \"6995a349-20f2-40e7-a7f9-0ee6c8535bd1\") " pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.517093 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6995a349-20f2-40e7-a7f9-0ee6c8535bd1-openshift-service-ca\") pod \"perses-operator-5446b9c989-md9tl\" (UID: \"6995a349-20f2-40e7-a7f9-0ee6c8535bd1\") " pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.573417 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.626776 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6995a349-20f2-40e7-a7f9-0ee6c8535bd1-openshift-service-ca\") pod \"perses-operator-5446b9c989-md9tl\" (UID: \"6995a349-20f2-40e7-a7f9-0ee6c8535bd1\") " pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.626971 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7zt\" (UniqueName: \"kubernetes.io/projected/6995a349-20f2-40e7-a7f9-0ee6c8535bd1-kube-api-access-pv7zt\") pod \"perses-operator-5446b9c989-md9tl\" (UID: \"6995a349-20f2-40e7-a7f9-0ee6c8535bd1\") " pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.628240 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6995a349-20f2-40e7-a7f9-0ee6c8535bd1-openshift-service-ca\") pod \"perses-operator-5446b9c989-md9tl\" (UID: \"6995a349-20f2-40e7-a7f9-0ee6c8535bd1\") " pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.655712 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7zt\" (UniqueName: \"kubernetes.io/projected/6995a349-20f2-40e7-a7f9-0ee6c8535bd1-kube-api-access-pv7zt\") pod \"perses-operator-5446b9c989-md9tl\" (UID: \"6995a349-20f2-40e7-a7f9-0ee6c8535bd1\") " pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.706007 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:05:38 crc kubenswrapper[4688]: I1125 13:05:38.909676 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5"] Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.061072 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s"] Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.269207 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk"] Nov 25 13:05:39 crc kubenswrapper[4688]: W1125 13:05:39.282504 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d68b3c_cac2_4e12_ac0c_788a6e134a8a.slice/crio-c591d4f2cccf367a2d02030f6b8a5a22aac281cc903f2181fc168e9d3ccebab9 WatchSource:0}: Error finding container c591d4f2cccf367a2d02030f6b8a5a22aac281cc903f2181fc168e9d3ccebab9: Status 404 returned error can't find the container with id c591d4f2cccf367a2d02030f6b8a5a22aac281cc903f2181fc168e9d3ccebab9 Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.359423 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-fv2lb"] Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.460873 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-md9tl"] Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.549788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" event={"ID":"603a75fc-30c7-4bcf-98ee-1b24c2c0c93c","Type":"ContainerStarted","Data":"fde1120f1835bad5ffb33d3cd5d5e6f00459a5eef145a41e7593f11be44adb0a"} Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.550981 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-md9tl" event={"ID":"6995a349-20f2-40e7-a7f9-0ee6c8535bd1","Type":"ContainerStarted","Data":"cdc15d00994423fa027b97875cb07fc4e15b34383c46fa22ee7e00daf8acc682"} Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.552388 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s" event={"ID":"fb4b1751-6bd8-418b-a987-7314862f08dc","Type":"ContainerStarted","Data":"7ad6bbd1e210dd8f52a4c996c573eb88d4d88979e063d5b3464d111aa30ff017"} Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.554330 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk" event={"ID":"c4d68b3c-cac2-4e12-ac0c-788a6e134a8a","Type":"ContainerStarted","Data":"c591d4f2cccf367a2d02030f6b8a5a22aac281cc903f2181fc168e9d3ccebab9"} Nov 25 13:05:39 crc kubenswrapper[4688]: I1125 13:05:39.555737 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5" event={"ID":"48743801-c673-4010-931f-62cdb6ecaa61","Type":"ContainerStarted","Data":"1f5485020f33106c653ee2d0c607852fa054923ce246346a0d7ea709bcbbed0c"} Nov 25 13:05:47 crc kubenswrapper[4688]: I1125 13:05:47.853674 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:05:47 crc kubenswrapper[4688]: I1125 13:05:47.854390 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:05:47 crc kubenswrapper[4688]: I1125 13:05:47.854463 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 13:05:47 crc kubenswrapper[4688]: I1125 13:05:47.855668 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 13:05:47 crc kubenswrapper[4688]: I1125 13:05:47.855790 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" gracePeriod=600 Nov 25 13:05:48 crc kubenswrapper[4688]: I1125 13:05:48.723396 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" exitCode=0 Nov 25 13:05:48 crc kubenswrapper[4688]: I1125 13:05:48.723466 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"} Nov 25 13:05:48 crc kubenswrapper[4688]: I1125 13:05:48.723864 4688 scope.go:117] "RemoveContainer" containerID="d7f6010fb0df9c2e667539baa0196010cf78e5e21c863cb43d0f9c102bd523c5" Nov 25 13:05:49 crc kubenswrapper[4688]: E1125 13:05:49.166802 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:05:49 crc kubenswrapper[4688]: I1125 13:05:49.736372 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:05:49 crc kubenswrapper[4688]: E1125 13:05:49.736709 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:05:56 crc kubenswrapper[4688]: E1125 13:05:56.717990 4688 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Nov 25 13:05:56 crc kubenswrapper[4688]: E1125 13:05:56.718713 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s_openshift-operators(fb4b1751-6bd8-418b-a987-7314862f08dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 13:05:56 crc kubenswrapper[4688]: E1125 13:05:56.719981 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s" podUID="fb4b1751-6bd8-418b-a987-7314862f08dc" Nov 25 13:05:56 crc kubenswrapper[4688]: E1125 13:05:56.828152 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s" podUID="fb4b1751-6bd8-418b-a987-7314862f08dc" Nov 25 13:06:00 crc kubenswrapper[4688]: E1125 13:06:00.266047 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" Nov 25 13:06:00 crc kubenswrapper[4688]: E1125 13:06:00.267163 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) 
--openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:e718854a7d6ca8accf0fa72db0eb902e46c44d747ad51dc3f06bba0cefaa3c01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:91531137fc1dcd740e277e0f65e120a0176a16f788c14c27925b61aa0b792ade,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:897e1bfad1187062725b54d87107bd0155972257a50d8335dd29e1999b828a4f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:95fe5b5746ca8c07ac9217ce2d8ac8e6afad17af210f9d8e0074df1310b209a8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:e9d9a89e4d8126a62b1852055482258ee528cac6398dd5d43ebad75ace0f33c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:ec684a0645ceb917b019af7ddba68c3533416e356ab0d0320a30e75ca7ebb31b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:3b9693fcde9b3a9494fb04735b1f7cfd0426f10be820fdc3f024175c0d3df1c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:580606f194180accc8abba099e17a26dca7522ec6d233fa2fdd40312771703e3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:e03777be39e71701935059cd877603874a13ac94daa73219d4e5e545599d78a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:aa47256193cfd2877853878e1ae97d2ab8b8e5deae62b387cbfad02b284d379c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:c595ff56b2cb85514bf4784db6ddb82e4e657e3e708a7fb695fc4997379a94d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:45a4ec2a519bcec99e886aa91
596d5356a2414a2bd103baaef9fa7838c672eb2,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgxnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-d8bb48f5d-fv2lb_openshift-operators(603a75fc-30c7-4bcf-98ee-1b24c2c0c93c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 13:06:00 crc kubenswrapper[4688]: E1125 13:06:00.268412 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" podUID="603a75fc-30c7-4bcf-98ee-1b24c2c0c93c" Nov 25 13:06:00 crc kubenswrapper[4688]: I1125 13:06:00.746618 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:06:00 crc kubenswrapper[4688]: E1125 13:06:00.747060 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:06:00 crc kubenswrapper[4688]: I1125 13:06:00.873809 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk" 
event={"ID":"c4d68b3c-cac2-4e12-ac0c-788a6e134a8a","Type":"ContainerStarted","Data":"d770a99289147a9c55edc1eee270024583212e5cbe01e0649b82fdcf26f907a7"} Nov 25 13:06:00 crc kubenswrapper[4688]: I1125 13:06:00.877268 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-md9tl" event={"ID":"6995a349-20f2-40e7-a7f9-0ee6c8535bd1","Type":"ContainerStarted","Data":"b98611917927f0074a9ab17247800c24c739fad4b07370afa891cb9a06a64596"} Nov 25 13:06:00 crc kubenswrapper[4688]: I1125 13:06:00.877317 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:06:00 crc kubenswrapper[4688]: E1125 13:06:00.877905 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb\\\"\"" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" podUID="603a75fc-30c7-4bcf-98ee-1b24c2c0c93c" Nov 25 13:06:00 crc kubenswrapper[4688]: I1125 13:06:00.900038 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk" podStartSLOduration=1.853134769 podStartE2EDuration="22.900018643s" podCreationTimestamp="2025-11-25 13:05:38 +0000 UTC" firstStartedPulling="2025-11-25 13:05:39.285417315 +0000 UTC m=+3089.395046183" lastFinishedPulling="2025-11-25 13:06:00.332300839 +0000 UTC m=+3110.441930057" observedRunningTime="2025-11-25 13:06:00.894079881 +0000 UTC m=+3111.003708749" watchObservedRunningTime="2025-11-25 13:06:00.900018643 +0000 UTC m=+3111.009647531" Nov 25 13:06:00 crc kubenswrapper[4688]: I1125 13:06:00.960422 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-md9tl" podStartSLOduration=2.111952321 podStartE2EDuration="22.960404408s" podCreationTimestamp="2025-11-25 13:05:38 +0000 UTC" firstStartedPulling="2025-11-25 13:05:39.466189371 +0000 UTC m=+3089.575818239" lastFinishedPulling="2025-11-25 13:06:00.314641448 +0000 UTC m=+3110.424270326" observedRunningTime="2025-11-25 13:06:00.957486763 +0000 UTC m=+3111.067115631" watchObservedRunningTime="2025-11-25 13:06:00.960404408 +0000 UTC m=+3111.070033276" Nov 25 13:06:01 crc kubenswrapper[4688]: I1125 13:06:01.888493 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5" event={"ID":"48743801-c673-4010-931f-62cdb6ecaa61","Type":"ContainerStarted","Data":"f914c0992753e13484a96463c1463323712c4077d8e7703f1655bae74628f3b0"} Nov 25 13:06:01 crc kubenswrapper[4688]: I1125 13:06:01.906956 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9n7g5" podStartSLOduration=3.529724424 podStartE2EDuration="24.906938022s" podCreationTimestamp="2025-11-25 13:05:37 +0000 UTC" firstStartedPulling="2025-11-25 13:05:38.937335367 +0000 UTC m=+3089.046964235" lastFinishedPulling="2025-11-25 13:06:00.314548965 +0000 UTC m=+3110.424177833" observedRunningTime="2025-11-25 13:06:01.900862806 +0000 UTC m=+3112.010491684" watchObservedRunningTime="2025-11-25 13:06:01.906938022 +0000 UTC m=+3112.016566910" Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.233281 4688 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.234012 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-api" containerID="cri-o://a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e" gracePeriod=30 Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.234093 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-listener" containerID="cri-o://18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b" gracePeriod=30 Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.234135 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-evaluator" containerID="cri-o://9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12" gracePeriod=30 Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.234150 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-notifier" containerID="cri-o://aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135" gracePeriod=30 Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.909930 4688 generic.go:334] "Generic (PLEG): container finished" podID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerID="9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12" exitCode=0 Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.909965 4688 generic.go:334] "Generic (PLEG): container finished" podID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerID="a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e" exitCode=0 Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.909983 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerDied","Data":"9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12"} Nov 25 13:06:03 crc kubenswrapper[4688]: I1125 13:06:03.910004 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerDied","Data":"a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e"} Nov 25 13:06:08 crc kubenswrapper[4688]: I1125 13:06:08.712206 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-md9tl" Nov 25 13:06:08 crc kubenswrapper[4688]: I1125 13:06:08.955218 4688 generic.go:334] "Generic (PLEG): container finished" podID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerID="aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135" exitCode=0 Nov 25 13:06:08 crc kubenswrapper[4688]: I1125 13:06:08.955263 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerDied","Data":"aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135"} Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.444306 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.629413 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-config-data\") pod \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.629625 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-scripts\") pod \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.629718 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-public-tls-certs\") pod \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.629769 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf4nj\" (UniqueName: \"kubernetes.io/projected/ba2920a0-7f0c-464e-aa20-d29a32ed6850-kube-api-access-qf4nj\") pod \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.629845 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-internal-tls-certs\") pod \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.629936 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-combined-ca-bundle\") pod \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\" (UID: \"ba2920a0-7f0c-464e-aa20-d29a32ed6850\") " Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.634993 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2920a0-7f0c-464e-aa20-d29a32ed6850-kube-api-access-qf4nj" (OuterVolumeSpecName: "kube-api-access-qf4nj") pod "ba2920a0-7f0c-464e-aa20-d29a32ed6850" (UID: "ba2920a0-7f0c-464e-aa20-d29a32ed6850"). InnerVolumeSpecName "kube-api-access-qf4nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.635008 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-scripts" (OuterVolumeSpecName: "scripts") pod "ba2920a0-7f0c-464e-aa20-d29a32ed6850" (UID: "ba2920a0-7f0c-464e-aa20-d29a32ed6850"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.694345 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ba2920a0-7f0c-464e-aa20-d29a32ed6850" (UID: "ba2920a0-7f0c-464e-aa20-d29a32ed6850"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.703780 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ba2920a0-7f0c-464e-aa20-d29a32ed6850" (UID: "ba2920a0-7f0c-464e-aa20-d29a32ed6850"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.732749 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.732786 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf4nj\" (UniqueName: \"kubernetes.io/projected/ba2920a0-7f0c-464e-aa20-d29a32ed6850-kube-api-access-qf4nj\") on node \"crc\" DevicePath \"\"" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.732796 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.732806 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.763784 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba2920a0-7f0c-464e-aa20-d29a32ed6850" (UID: "ba2920a0-7f0c-464e-aa20-d29a32ed6850"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.770763 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-config-data" (OuterVolumeSpecName: "config-data") pod "ba2920a0-7f0c-464e-aa20-d29a32ed6850" (UID: "ba2920a0-7f0c-464e-aa20-d29a32ed6850"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.834857 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.834896 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2920a0-7f0c-464e-aa20-d29a32ed6850-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.969044 4688 generic.go:334] "Generic (PLEG): container finished" podID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerID="18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b" exitCode=0 Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.969403 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerDied","Data":"18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b"} Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.969565 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba2920a0-7f0c-464e-aa20-d29a32ed6850","Type":"ContainerDied","Data":"c50c24bc241277613bb57062d84cab349173f0aeae20943f84ac95fae84d6027"} Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.969585 4688 scope.go:117] "RemoveContainer" containerID="18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.969487 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:06:09 crc kubenswrapper[4688]: I1125 13:06:09.997760 4688 scope.go:117] "RemoveContainer" containerID="aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.012636 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.021188 4688 scope.go:117] "RemoveContainer" containerID="9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.022793 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.045360 4688 scope.go:117] "RemoveContainer" containerID="a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.045960 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 25 13:06:10 crc kubenswrapper[4688]: E1125 13:06:10.046490 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-listener" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.046568 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-listener" Nov 25 13:06:10 crc kubenswrapper[4688]: E1125 13:06:10.046619 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-notifier" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.046635 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-notifier" Nov 25 13:06:10 crc kubenswrapper[4688]: 
E1125 13:06:10.046651 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-api" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.046660 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-api" Nov 25 13:06:10 crc kubenswrapper[4688]: E1125 13:06:10.046683 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-evaluator" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.046691 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-evaluator" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.046942 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-listener" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.046970 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-evaluator" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.047005 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-api" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.047050 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" containerName="aodh-notifier" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.049280 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.051047 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.053034 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.053192 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.053299 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.053404 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tn6ss" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.057340 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.126410 4688 scope.go:117] "RemoveContainer" containerID="18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b" Nov 25 13:06:10 crc kubenswrapper[4688]: E1125 13:06:10.126961 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b\": container with ID starting with 18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b not found: ID does not exist" containerID="18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.126992 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b"} 
err="failed to get container status \"18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b\": rpc error: code = NotFound desc = could not find container \"18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b\": container with ID starting with 18206cf69e4a729ed817e13ccc659b9f1a9d8772b08015f69ccc5e999447463b not found: ID does not exist" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.127012 4688 scope.go:117] "RemoveContainer" containerID="aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135" Nov 25 13:06:10 crc kubenswrapper[4688]: E1125 13:06:10.127343 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135\": container with ID starting with aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135 not found: ID does not exist" containerID="aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.127454 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135"} err="failed to get container status \"aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135\": rpc error: code = NotFound desc = could not find container \"aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135\": container with ID starting with aaf4e533adcb26852d9334efa1e9b20afe4e003d3b88047fe7840f886665c135 not found: ID does not exist" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.127574 4688 scope.go:117] "RemoveContainer" containerID="9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12" Nov 25 13:06:10 crc kubenswrapper[4688]: E1125 13:06:10.127938 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12\": container with ID starting with 9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12 not found: ID does not exist" containerID="9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.127961 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12"} err="failed to get container status \"9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12\": rpc error: code = NotFound desc = could not find container \"9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12\": container with ID starting with 9e383b254ec9b7fa502292819244586c19d0ed35cea55d1249bac531f87f0f12 not found: ID does not exist" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.127976 4688 scope.go:117] "RemoveContainer" containerID="a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e" Nov 25 13:06:10 crc kubenswrapper[4688]: E1125 13:06:10.128293 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e\": container with ID starting with a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e not found: ID does not exist" containerID="a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.128346 4688 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e"} err="failed to get container status \"a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e\": rpc error: code = NotFound desc = could not find container \"a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e\": container with ID starting with a7e18bb9bc1048bf9e70c7907dca00daf0d85cf36fdadd3a4d1f6a545b4dd84e not found: ID does not exist" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.139961 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-scripts\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.139994 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-config-data\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.140010 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.140042 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-public-tls-certs\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.140190 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdb4p\" (UniqueName: \"kubernetes.io/projected/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-kube-api-access-tdb4p\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.140260 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-internal-tls-certs\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.241844 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-scripts\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.241895 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-config-data\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.241912 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.241941 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-public-tls-certs\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.242015 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdb4p\" (UniqueName: \"kubernetes.io/projected/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-kube-api-access-tdb4p\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.242059 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-internal-tls-certs\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.245759 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.245886 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-config-data\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.245911 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-internal-tls-certs\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.245964 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-scripts\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.246125 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-public-tls-certs\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.258759 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdb4p\" (UniqueName: \"kubernetes.io/projected/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-kube-api-access-tdb4p\") pod \"aodh-0\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") " pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.397334 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.755161 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2920a0-7f0c-464e-aa20-d29a32ed6850" path="/var/lib/kubelet/pods/ba2920a0-7f0c-464e-aa20-d29a32ed6850/volumes" Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.864405 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 13:06:10 crc kubenswrapper[4688]: W1125 13:06:10.864883 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05953b8a_db86_4a1d_90e4_7cfa93fb87e3.slice/crio-499f0b035f98ade1e20d8f923b29f4ad58b70103c117be197cf820e7c2b0590c WatchSource:0}: Error finding container 499f0b035f98ade1e20d8f923b29f4ad58b70103c117be197cf820e7c2b0590c: Status 404 returned error can't find the container with id 499f0b035f98ade1e20d8f923b29f4ad58b70103c117be197cf820e7c2b0590c Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.867262 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 13:06:10 crc kubenswrapper[4688]: I1125 13:06:10.980813 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerStarted","Data":"499f0b035f98ade1e20d8f923b29f4ad58b70103c117be197cf820e7c2b0590c"} Nov 25 13:06:11 crc kubenswrapper[4688]: I1125 13:06:11.740457 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:06:11 crc kubenswrapper[4688]: E1125 13:06:11.742752 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:06:13 crc kubenswrapper[4688]: I1125 13:06:13.002330 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s" event={"ID":"fb4b1751-6bd8-418b-a987-7314862f08dc","Type":"ContainerStarted","Data":"6a33a6f918a0ef071eb5d9d2a2b05d503b15d0ab03cc29d099ffbcc05a6e7133"} Nov 25 13:06:13 crc kubenswrapper[4688]: I1125 13:06:13.006282 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerStarted","Data":"2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e"} Nov 25 13:06:13 crc kubenswrapper[4688]: I1125 13:06:13.760843 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s" podStartSLOduration=-9223372001.093948 podStartE2EDuration="35.760827081s" podCreationTimestamp="2025-11-25 13:05:38 +0000 UTC" firstStartedPulling="2025-11-25 13:05:39.070670007 +0000 UTC m=+3089.180298875" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:06:13.030793674 +0000 UTC m=+3123.140422562" watchObservedRunningTime="2025-11-25 13:06:13.760827081 +0000 UTC m=+3123.870455949" Nov 25 13:06:14 crc kubenswrapper[4688]: I1125 13:06:14.019302 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerStarted","Data":"3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680"} Nov 25 13:06:15 crc kubenswrapper[4688]: I1125 13:06:15.032475 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerStarted","Data":"41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a"} Nov 25 13:06:15 crc kubenswrapper[4688]: I1125 13:06:15.038796 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" event={"ID":"603a75fc-30c7-4bcf-98ee-1b24c2c0c93c","Type":"ContainerStarted","Data":"41669b563566cd15f212bbd7ad0a0355fbcd002f1cf1aaa158a1125f12bb92b8"} Nov 25 13:06:15 crc kubenswrapper[4688]: I1125 13:06:15.039303 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" Nov 25 13:06:15 crc kubenswrapper[4688]: I1125 13:06:15.051389 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" Nov 25 13:06:15 crc kubenswrapper[4688]: I1125 13:06:15.069170 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-fv2lb" podStartSLOduration=1.8621199210000001 podStartE2EDuration="37.069152806s" podCreationTimestamp="2025-11-25 13:05:38 +0000 UTC" firstStartedPulling="2025-11-25 13:05:39.365459146 +0000 UTC m=+3089.475088014" lastFinishedPulling="2025-11-25 13:06:14.572492031 +0000 UTC m=+3124.682120899" observedRunningTime="2025-11-25 13:06:15.060277718 +0000 UTC m=+3125.169906586" watchObservedRunningTime="2025-11-25 13:06:15.069152806 +0000 UTC m=+3125.178781674" Nov 25 13:06:16 crc kubenswrapper[4688]: I1125 13:06:16.049598 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerStarted","Data":"5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd"} Nov 25 13:06:16 crc kubenswrapper[4688]: I1125 13:06:16.082661 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.111279383 podStartE2EDuration="6.082637466s" podCreationTimestamp="2025-11-25 13:06:10 +0000 UTC" firstStartedPulling="2025-11-25 13:06:10.867055935 +0000 UTC m=+3120.976684803" lastFinishedPulling="2025-11-25 13:06:14.838414018 +0000 UTC m=+3124.948042886" observedRunningTime="2025-11-25 13:06:16.068928193 +0000 UTC m=+3126.178557061" watchObservedRunningTime="2025-11-25 13:06:16.082637466 +0000 UTC m=+3126.192266334" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.020828 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.022867 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.024451 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.024715 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-x5nxd" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.026219 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.026352 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.026475 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.065404 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.134802 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.134861 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.134963 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/585e8555-4253-49c0-a482-9aefd967e4d2-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.135035 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz6km\" (UniqueName: \"kubernetes.io/projected/585e8555-4253-49c0-a482-9aefd967e4d2-kube-api-access-jz6km\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.135060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.135312 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/585e8555-4253-49c0-a482-9aefd967e4d2-config-out\") pod \"alertmanager-metric-storage-0\" (UID: 
\"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.135399 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/585e8555-4253-49c0-a482-9aefd967e4d2-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.236594 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.236742 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/585e8555-4253-49c0-a482-9aefd967e4d2-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.236815 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/585e8555-4253-49c0-a482-9aefd967e4d2-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.236860 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.236890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.236930 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/585e8555-4253-49c0-a482-9aefd967e4d2-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.236973 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz6km\" (UniqueName: \"kubernetes.io/projected/585e8555-4253-49c0-a482-9aefd967e4d2-kube-api-access-jz6km\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.237782 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/585e8555-4253-49c0-a482-9aefd967e4d2-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.247100 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.247897 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.249883 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/585e8555-4253-49c0-a482-9aefd967e4d2-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.254169 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/585e8555-4253-49c0-a482-9aefd967e4d2-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.264966 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/585e8555-4253-49c0-a482-9aefd967e4d2-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.291237 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz6km\" (UniqueName: \"kubernetes.io/projected/585e8555-4253-49c0-a482-9aefd967e4d2-kube-api-access-jz6km\") pod \"alertmanager-metric-storage-0\" (UID: \"585e8555-4253-49c0-a482-9aefd967e4d2\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.344313 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.659607 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.666150 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.671415 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-g9v2p" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.672294 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.672468 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.672632 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.680128 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.680281 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.697534 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.748602 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.748656 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.748709 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.748776 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.748797 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.748959 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.749103 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.749167 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjm27\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-kube-api-access-jjm27\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.853660 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.853716 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.853765 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.853811 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.853843 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.853867 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " 
pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.853924 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.853962 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjm27\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-kube-api-access-jjm27\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.857275 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.864134 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.861110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.868241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.874006 4688 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.874049 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/04c24bb4fab9fd71e16796221da4d155d5418a2a713de67cbe514390cb74442c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.874551 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.878124 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.891285 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjm27\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-kube-api-access-jjm27\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:17 crc kubenswrapper[4688]: I1125 13:06:17.969211 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"prometheus-metric-storage-0\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:18 crc kubenswrapper[4688]: I1125 13:06:18.034316 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:06:18 crc kubenswrapper[4688]: I1125 13:06:18.134349 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 13:06:18 crc kubenswrapper[4688]: W1125 13:06:18.153530 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod585e8555_4253_49c0_a482_9aefd967e4d2.slice/crio-67ba0f70188775b4c848f8379b728d4967e27472103bd870a4aaf7d258361335 WatchSource:0}: Error finding container 67ba0f70188775b4c848f8379b728d4967e27472103bd870a4aaf7d258361335: Status 404 returned error can't find the container with id 67ba0f70188775b4c848f8379b728d4967e27472103bd870a4aaf7d258361335 Nov 25 13:06:18 crc kubenswrapper[4688]: I1125 13:06:18.733973 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:06:19 crc kubenswrapper[4688]: I1125 13:06:19.086370 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerStarted","Data":"6afa99936a4ec24d47a05b940640f4407f8bd727c503465b9cb7b1f9dcda8b94"} Nov 25 13:06:19 crc kubenswrapper[4688]: I1125 13:06:19.104715 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"585e8555-4253-49c0-a482-9aefd967e4d2","Type":"ContainerStarted","Data":"67ba0f70188775b4c848f8379b728d4967e27472103bd870a4aaf7d258361335"} Nov 25 13:06:22 crc kubenswrapper[4688]: I1125 13:06:22.740587 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:06:22 crc kubenswrapper[4688]: E1125 13:06:22.741389 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:06:26 crc kubenswrapper[4688]: I1125 13:06:26.173831 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerStarted","Data":"cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8"} Nov 25 13:06:27 crc kubenswrapper[4688]: I1125 13:06:27.185072 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"585e8555-4253-49c0-a482-9aefd967e4d2","Type":"ContainerStarted","Data":"5db77bf0e7a8c8b7bc8ee8027d1af4debdaec2bf58d1671d151334f8786248b0"} Nov 25 13:06:33 crc kubenswrapper[4688]: I1125 13:06:33.243166 4688 generic.go:334] "Generic (PLEG): container finished" podID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerID="cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8" exitCode=0 Nov 25 13:06:33 crc kubenswrapper[4688]: I1125 13:06:33.243234 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerDied","Data":"cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8"} Nov 25 13:06:34 crc kubenswrapper[4688]: I1125 13:06:34.259565 4688 generic.go:334] "Generic (PLEG): container 
finished" podID="585e8555-4253-49c0-a482-9aefd967e4d2" containerID="5db77bf0e7a8c8b7bc8ee8027d1af4debdaec2bf58d1671d151334f8786248b0" exitCode=0 Nov 25 13:06:34 crc kubenswrapper[4688]: I1125 13:06:34.259626 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"585e8555-4253-49c0-a482-9aefd967e4d2","Type":"ContainerDied","Data":"5db77bf0e7a8c8b7bc8ee8027d1af4debdaec2bf58d1671d151334f8786248b0"} Nov 25 13:06:34 crc kubenswrapper[4688]: I1125 13:06:34.739643 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:06:34 crc kubenswrapper[4688]: E1125 13:06:34.740007 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:06:43 crc kubenswrapper[4688]: I1125 13:06:43.350068 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"585e8555-4253-49c0-a482-9aefd967e4d2","Type":"ContainerStarted","Data":"e38b0afb6f0d7558cf687bf605fa092e4be41fab828d538743cc871081572509"} Nov 25 13:06:43 crc kubenswrapper[4688]: I1125 13:06:43.353260 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerStarted","Data":"2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682"} Nov 25 13:06:46 crc kubenswrapper[4688]: I1125 13:06:46.740039 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:06:46 crc kubenswrapper[4688]: E1125 13:06:46.741044 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:06:47 crc kubenswrapper[4688]: I1125 13:06:47.393281 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerStarted","Data":"9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0"} Nov 25 13:06:47 crc kubenswrapper[4688]: I1125 13:06:47.396172 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"585e8555-4253-49c0-a482-9aefd967e4d2","Type":"ContainerStarted","Data":"b030ea5e76aa49f6da768082eb16cd0ae26eacead85f1b5837d122cb8f0b80d1"} Nov 25 13:06:47 crc kubenswrapper[4688]: I1125 13:06:47.396446 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:47 crc kubenswrapper[4688]: I1125 13:06:47.399643 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 25 13:06:47 crc kubenswrapper[4688]: I1125 13:06:47.429213 4688 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.877209548 podStartE2EDuration="31.429195446s" podCreationTimestamp="2025-11-25 13:06:16 +0000 UTC" firstStartedPulling="2025-11-25 13:06:18.164161605 +0000 UTC m=+3128.273790473" lastFinishedPulling="2025-11-25 13:06:42.716147503 +0000 UTC m=+3152.825776371" observedRunningTime="2025-11-25 13:06:47.418817635 +0000 UTC m=+3157.528446563" watchObservedRunningTime="2025-11-25 13:06:47.429195446 +0000 UTC m=+3157.538824314" Nov 25 13:06:50 crc kubenswrapper[4688]: I1125 13:06:50.430020 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerStarted","Data":"43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a"} Nov 25 13:06:50 crc kubenswrapper[4688]: I1125 13:06:50.463316 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.499592323 podStartE2EDuration="34.463299954s" podCreationTimestamp="2025-11-25 13:06:16 +0000 UTC" firstStartedPulling="2025-11-25 13:06:18.772308707 +0000 UTC m=+3128.881937575" lastFinishedPulling="2025-11-25 13:06:49.736016338 +0000 UTC m=+3159.845645206" observedRunningTime="2025-11-25 13:06:50.458830203 +0000 UTC m=+3160.568459071" watchObservedRunningTime="2025-11-25 13:06:50.463299954 +0000 UTC m=+3160.572928822" Nov 25 13:06:53 crc kubenswrapper[4688]: I1125 13:06:53.034423 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:01 crc kubenswrapper[4688]: I1125 13:07:01.740718 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:07:01 crc kubenswrapper[4688]: E1125 13:07:01.742333 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:07:03 crc kubenswrapper[4688]: I1125 13:07:03.034775 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:03 crc kubenswrapper[4688]: I1125 13:07:03.038035 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:03 crc kubenswrapper[4688]: I1125 13:07:03.586866 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.022274 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.022825 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="c22633a0-aeed-4e1d-a178-05b245f91b77" containerName="openstackclient" containerID="cri-o://c1efd3478df99f65c6a797aea85e0e1dd8346805300a3e9a19c89c410fa6a14f" gracePeriod=2 Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.031972 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.062011 4688 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 13:07:05 crc kubenswrapper[4688]: E1125 13:07:05.062492 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22633a0-aeed-4e1d-a178-05b245f91b77" containerName="openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.062508 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22633a0-aeed-4e1d-a178-05b245f91b77" containerName="openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.062713 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22633a0-aeed-4e1d-a178-05b245f91b77" containerName="openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.063500 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.069468 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c22633a0-aeed-4e1d-a178-05b245f91b77" podUID="3ecf7482-aefd-4e71-a856-9818296c91e7" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.080842 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.138758 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3ecf7482-aefd-4e71-a856-9818296c91e7-openstack-config\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.138819 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecf7482-aefd-4e71-a856-9818296c91e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.138931 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jw9z\" (UniqueName: \"kubernetes.io/projected/3ecf7482-aefd-4e71-a856-9818296c91e7-kube-api-access-9jw9z\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.139106 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3ecf7482-aefd-4e71-a856-9818296c91e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.240875 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3ecf7482-aefd-4e71-a856-9818296c91e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.241008 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3ecf7482-aefd-4e71-a856-9818296c91e7-openstack-config\") pod \"openstackclient\" (UID: 
\"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.241052 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecf7482-aefd-4e71-a856-9818296c91e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.241119 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jw9z\" (UniqueName: \"kubernetes.io/projected/3ecf7482-aefd-4e71-a856-9818296c91e7-kube-api-access-9jw9z\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.242454 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3ecf7482-aefd-4e71-a856-9818296c91e7-openstack-config\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.246995 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3ecf7482-aefd-4e71-a856-9818296c91e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.249041 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecf7482-aefd-4e71-a856-9818296c91e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.272168 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jw9z\" (UniqueName: \"kubernetes.io/projected/3ecf7482-aefd-4e71-a856-9818296c91e7-kube-api-access-9jw9z\") pod \"openstackclient\" (UID: \"3ecf7482-aefd-4e71-a856-9818296c91e7\") " pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.389966 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.519709 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.520343 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-api" containerID="cri-o://2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e" gracePeriod=30 Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.520843 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-listener" containerID="cri-o://5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd" gracePeriod=30 Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.520922 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-notifier" containerID="cri-o://41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a" gracePeriod=30 Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.520976 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-evaluator" containerID="cri-o://3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680" gracePeriod=30 Nov 25 13:07:05 crc kubenswrapper[4688]: I1125 13:07:05.796072 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.296821 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.297106 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="prometheus" containerID="cri-o://2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682" gracePeriod=600 Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.297187 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="thanos-sidecar" containerID="cri-o://43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a" gracePeriod=600 Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.297239 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="config-reloader" containerID="cri-o://9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0" gracePeriod=600 Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.629093 4688 generic.go:334] "Generic (PLEG): container finished" podID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerID="43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a" exitCode=0 Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.629427 4688 generic.go:334] "Generic (PLEG): container finished" podID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerID="2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682" exitCode=0 Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.629180 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerDied","Data":"43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a"} Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.629496 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerDied","Data":"2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682"} Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.632230 4688 generic.go:334] "Generic (PLEG): container finished" podID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerID="3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680" exitCode=0 Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.632248 4688 generic.go:334] "Generic (PLEG): container finished" podID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerID="2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e" exitCode=0 Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.632285 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerDied","Data":"3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680"} Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.632300 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerDied","Data":"2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e"} Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.633778 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3ecf7482-aefd-4e71-a856-9818296c91e7","Type":"ContainerStarted","Data":"e26f94bec57cb86671c02025169309312f89c80e92c87e11dca56ce72a0880d2"} Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.633816 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3ecf7482-aefd-4e71-a856-9818296c91e7","Type":"ContainerStarted","Data":"b03428da03ea8884ef6422f92994fe98e46dd0ff2870a57e7fbe08ed3719a716"} Nov 25 13:07:06 crc kubenswrapper[4688]: I1125 13:07:06.661702 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.661679406 podStartE2EDuration="1.661679406s" podCreationTimestamp="2025-11-25 13:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:07:06.650031661 +0000 UTC m=+3176.759660529" watchObservedRunningTime="2025-11-25 13:07:06.661679406 +0000 UTC m=+3176.771308264" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.226798 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.314061 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjm27\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-kube-api-access-jjm27\") pod \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.314206 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.314239 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config-out\") pod \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.314274 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-web-config\") pod \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.314318 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config\") pod \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.314344 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-prometheus-metric-storage-rulefiles-0\") pod \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.314403 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-tls-assets\") pod \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.314485 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-thanos-prometheus-http-client-file\") pod \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\" (UID: \"30d7af41-d7e2-4fd5-a5a0-59fa424ced37\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.315329 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "30d7af41-d7e2-4fd5-a5a0-59fa424ced37" (UID: "30d7af41-d7e2-4fd5-a5a0-59fa424ced37"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.401046 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config" (OuterVolumeSpecName: "config") pod "30d7af41-d7e2-4fd5-a5a0-59fa424ced37" (UID: "30d7af41-d7e2-4fd5-a5a0-59fa424ced37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.401661 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config-out" (OuterVolumeSpecName: "config-out") pod "30d7af41-d7e2-4fd5-a5a0-59fa424ced37" (UID: "30d7af41-d7e2-4fd5-a5a0-59fa424ced37"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.404735 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-kube-api-access-jjm27" (OuterVolumeSpecName: "kube-api-access-jjm27") pod "30d7af41-d7e2-4fd5-a5a0-59fa424ced37" (UID: "30d7af41-d7e2-4fd5-a5a0-59fa424ced37"). InnerVolumeSpecName "kube-api-access-jjm27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.409949 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "30d7af41-d7e2-4fd5-a5a0-59fa424ced37" (UID: "30d7af41-d7e2-4fd5-a5a0-59fa424ced37"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.410391 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "30d7af41-d7e2-4fd5-a5a0-59fa424ced37" (UID: "30d7af41-d7e2-4fd5-a5a0-59fa424ced37"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.418306 4688 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config-out\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.418344 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-config\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.418357 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.418372 4688 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.418386 4688 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.418399 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjm27\" (UniqueName: \"kubernetes.io/projected/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-kube-api-access-jjm27\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.505189 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-web-config" (OuterVolumeSpecName: "web-config") pod "30d7af41-d7e2-4fd5-a5a0-59fa424ced37" (UID: "30d7af41-d7e2-4fd5-a5a0-59fa424ced37"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.511761 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "30d7af41-d7e2-4fd5-a5a0-59fa424ced37" (UID: "30d7af41-d7e2-4fd5-a5a0-59fa424ced37"). InnerVolumeSpecName "pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.520638 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") on node \"crc\" " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.520670 4688 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30d7af41-d7e2-4fd5-a5a0-59fa424ced37-web-config\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.554164 4688 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.554344 4688 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c") on node "crc" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.622239 4688 reconciler_common.go:293] "Volume detached for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.643126 4688 generic.go:334] "Generic (PLEG): container finished" podID="c22633a0-aeed-4e1d-a178-05b245f91b77" containerID="c1efd3478df99f65c6a797aea85e0e1dd8346805300a3e9a19c89c410fa6a14f" exitCode=137 Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.647345 4688 generic.go:334] "Generic (PLEG): container finished" podID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerID="41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a" exitCode=0 Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.647413 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerDied","Data":"41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a"} Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.653141 4688 generic.go:334] "Generic (PLEG): container finished" podID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerID="9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0" exitCode=0 Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.653206 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.653227 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerDied","Data":"9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0"} Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.653291 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30d7af41-d7e2-4fd5-a5a0-59fa424ced37","Type":"ContainerDied","Data":"6afa99936a4ec24d47a05b940640f4407f8bd727c503465b9cb7b1f9dcda8b94"} Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.653309 4688 scope.go:117] "RemoveContainer" containerID="43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.703769 4688 scope.go:117] "RemoveContainer" containerID="9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.704558 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.722367 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.732341 4688 scope.go:117] "RemoveContainer" containerID="2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.739207 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.747402 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:07:07 crc kubenswrapper[4688]: E1125 13:07:07.747859 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="thanos-sidecar" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.747873 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="thanos-sidecar" Nov 25 13:07:07 crc kubenswrapper[4688]: E1125 13:07:07.747881 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="init-config-reloader" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.747888 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="init-config-reloader" Nov 25 13:07:07 crc kubenswrapper[4688]: E1125 13:07:07.747910 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="prometheus" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.747916 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="prometheus" Nov 25 13:07:07 crc kubenswrapper[4688]: E1125 13:07:07.747928 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="config-reloader" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.747934 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="config-reloader" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.748115 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="prometheus" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.748130 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="thanos-sidecar" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.748149 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" containerName="config-reloader" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.751786 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.754510 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.754685 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.754932 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.755086 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.755191 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-g9v2p" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.755658 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.758930 4688 scope.go:117] "RemoveContainer" containerID="cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.770685 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.771225 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.789505 4688 scope.go:117] "RemoveContainer" containerID="43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a" Nov 25 13:07:07 crc kubenswrapper[4688]: E1125 13:07:07.790004 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a\": container with ID starting with 43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a not found: ID does not exist" containerID="43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.790049 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a"} err="failed to get container status \"43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a\": rpc error: code = NotFound desc = could not find container \"43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a\": container with ID starting with 43bc24050ed7280d4c84c382288ab95c6104756813bf9ce2e82cf25a9c582f6a not found: ID does not exist" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.790075 4688 scope.go:117] "RemoveContainer" containerID="9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0" Nov 25 13:07:07 crc kubenswrapper[4688]: E1125 13:07:07.790616 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0\": container with ID starting with 9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0 not found: ID does not exist" 
containerID="9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.790640 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0"} err="failed to get container status \"9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0\": rpc error: code = NotFound desc = could not find container \"9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0\": container with ID starting with 9df5daa8b866b27d8b719174a88553f3cc86c40b795057bb301c07245bbc11f0 not found: ID does not exist" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.790657 4688 scope.go:117] "RemoveContainer" containerID="2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682" Nov 25 13:07:07 crc kubenswrapper[4688]: E1125 13:07:07.791902 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682\": container with ID starting with 2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682 not found: ID does not exist" containerID="2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.791927 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682"} err="failed to get container status \"2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682\": rpc error: code = NotFound desc = could not find container \"2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682\": container with ID starting with 2776b121111333a4823f79e992d4a074b020012f01d36ccf6724d1bdaf907682 not found: ID does not exist" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.791947 4688 scope.go:117] "RemoveContainer" containerID="cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8" Nov 25 13:07:07 crc kubenswrapper[4688]: E1125 13:07:07.792177 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8\": container with ID starting with cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8 not found: ID does not exist" containerID="cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.792198 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8"} err="failed to get container status \"cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8\": rpc error: code = NotFound desc = could not find container \"cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8\": container with ID starting with cc1ac9c2d4457936a457f91e784bdfe83256e29136a6f5abac9c469c595724e8 not found: ID does not exist" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.825194 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config-secret\") pod \"c22633a0-aeed-4e1d-a178-05b245f91b77\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " Nov 25 13:07:07 crc 
kubenswrapper[4688]: I1125 13:07:07.825343 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zjwx\" (UniqueName: \"kubernetes.io/projected/c22633a0-aeed-4e1d-a178-05b245f91b77-kube-api-access-5zjwx\") pod \"c22633a0-aeed-4e1d-a178-05b245f91b77\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.825408 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config\") pod \"c22633a0-aeed-4e1d-a178-05b245f91b77\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.825435 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-combined-ca-bundle\") pod \"c22633a0-aeed-4e1d-a178-05b245f91b77\" (UID: \"c22633a0-aeed-4e1d-a178-05b245f91b77\") " Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826126 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/650db49e-f2cf-4a21-96e0-1eed7d15d13e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826155 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826207 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826332 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826430 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826453 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " 
pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826478 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826513 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826577 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826632 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.826720 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7v2\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-kube-api-access-8x7v2\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.829771 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22633a0-aeed-4e1d-a178-05b245f91b77-kube-api-access-5zjwx" (OuterVolumeSpecName: "kube-api-access-5zjwx") pod "c22633a0-aeed-4e1d-a178-05b245f91b77" (UID: "c22633a0-aeed-4e1d-a178-05b245f91b77"). InnerVolumeSpecName "kube-api-access-5zjwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.856853 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c22633a0-aeed-4e1d-a178-05b245f91b77" (UID: "c22633a0-aeed-4e1d-a178-05b245f91b77"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.862724 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c22633a0-aeed-4e1d-a178-05b245f91b77" (UID: "c22633a0-aeed-4e1d-a178-05b245f91b77"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.876648 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c22633a0-aeed-4e1d-a178-05b245f91b77" (UID: "c22633a0-aeed-4e1d-a178-05b245f91b77"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.928894 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929153 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929181 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929205 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929236 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929268 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929308 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7v2\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-kube-api-access-8x7v2\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " 
pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929383 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/650db49e-f2cf-4a21-96e0-1eed7d15d13e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929401 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929434 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929462 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929542 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929553 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929563 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c22633a0-aeed-4e1d-a178-05b245f91b77-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.929574 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zjwx\" (UniqueName: \"kubernetes.io/projected/c22633a0-aeed-4e1d-a178-05b245f91b77-kube-api-access-5zjwx\") on node \"crc\" DevicePath \"\"" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.930761 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/650db49e-f2cf-4a21-96e0-1eed7d15d13e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.932166 4688 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.932211 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/04c24bb4fab9fd71e16796221da4d155d5418a2a713de67cbe514390cb74442c/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.933195 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.933449 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.934242 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.935342 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.936231 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.936261 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.939972 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.942459 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.954220 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7v2\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-kube-api-access-8x7v2\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:07 crc kubenswrapper[4688]: I1125 13:07:07.992586 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"prometheus-metric-storage-0\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:08 crc kubenswrapper[4688]: I1125 13:07:08.085231 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:08 crc kubenswrapper[4688]: I1125 13:07:08.545175 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 13:07:08 crc kubenswrapper[4688]: W1125 13:07:08.551056 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod650db49e_f2cf_4a21_96e0_1eed7d15d13e.slice/crio-36b1f44d2e0e2b76a5185bcbcdc6228812193f2368f22abbbdc5bfb59757c35f WatchSource:0}: Error finding container 36b1f44d2e0e2b76a5185bcbcdc6228812193f2368f22abbbdc5bfb59757c35f: Status 404 returned error can't find the container with id 36b1f44d2e0e2b76a5185bcbcdc6228812193f2368f22abbbdc5bfb59757c35f
Nov 25 13:07:08 crc kubenswrapper[4688]: I1125 13:07:08.670805 4688 scope.go:117] "RemoveContainer" containerID="c1efd3478df99f65c6a797aea85e0e1dd8346805300a3e9a19c89c410fa6a14f"
Nov 25 13:07:08 crc kubenswrapper[4688]: I1125 13:07:08.670969 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 25 13:07:08 crc kubenswrapper[4688]: I1125 13:07:08.680901 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerStarted","Data":"36b1f44d2e0e2b76a5185bcbcdc6228812193f2368f22abbbdc5bfb59757c35f"}
Nov 25 13:07:08 crc kubenswrapper[4688]: I1125 13:07:08.691332 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c22633a0-aeed-4e1d-a178-05b245f91b77" podUID="3ecf7482-aefd-4e71-a856-9818296c91e7"
Nov 25 13:07:08 crc kubenswrapper[4688]: I1125 13:07:08.754672 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d7af41-d7e2-4fd5-a5a0-59fa424ced37" path="/var/lib/kubelet/pods/30d7af41-d7e2-4fd5-a5a0-59fa424ced37/volumes"
Nov 25 13:07:08 crc kubenswrapper[4688]: I1125 13:07:08.755786 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22633a0-aeed-4e1d-a178-05b245f91b77" path="/var/lib/kubelet/pods/c22633a0-aeed-4e1d-a178-05b245f91b77/volumes"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.473839 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.567254 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-internal-tls-certs\") pod \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") "
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.567335 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdb4p\" (UniqueName: \"kubernetes.io/projected/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-kube-api-access-tdb4p\") pod \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") "
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.567364 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-combined-ca-bundle\") pod \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") "
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.567458 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-scripts\") pod \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") "
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.567492 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-public-tls-certs\") pod \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") "
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.567594 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-config-data\") pod \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") "
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.573661 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-scripts" (OuterVolumeSpecName: "scripts") pod "05953b8a-db86-4a1d-90e4-7cfa93fb87e3" (UID: "05953b8a-db86-4a1d-90e4-7cfa93fb87e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.583737 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-kube-api-access-tdb4p" (OuterVolumeSpecName: "kube-api-access-tdb4p") pod "05953b8a-db86-4a1d-90e4-7cfa93fb87e3" (UID: "05953b8a-db86-4a1d-90e4-7cfa93fb87e3"). InnerVolumeSpecName "kube-api-access-tdb4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.638357 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "05953b8a-db86-4a1d-90e4-7cfa93fb87e3" (UID: "05953b8a-db86-4a1d-90e4-7cfa93fb87e3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.675900 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdb4p\" (UniqueName: \"kubernetes.io/projected/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-kube-api-access-tdb4p\") on node \"crc\" DevicePath \"\""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.675931 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.675941 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.694458 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05953b8a-db86-4a1d-90e4-7cfa93fb87e3" (UID: "05953b8a-db86-4a1d-90e4-7cfa93fb87e3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.746577 4688 generic.go:334] "Generic (PLEG): container finished" podID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerID="5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd" exitCode=0
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.746640 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerDied","Data":"5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd"}
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.746665 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05953b8a-db86-4a1d-90e4-7cfa93fb87e3","Type":"ContainerDied","Data":"499f0b035f98ade1e20d8f923b29f4ad58b70103c117be197cf820e7c2b0590c"}
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.746681 4688 scope.go:117] "RemoveContainer" containerID="5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.746849 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.770079 4688 scope.go:117] "RemoveContainer" containerID="41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.779729 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.793298 4688 scope.go:117] "RemoveContainer" containerID="3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.797426 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-config-data" (OuterVolumeSpecName: "config-data") pod "05953b8a-db86-4a1d-90e4-7cfa93fb87e3" (UID: "05953b8a-db86-4a1d-90e4-7cfa93fb87e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.880325 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05953b8a-db86-4a1d-90e4-7cfa93fb87e3" (UID: "05953b8a-db86-4a1d-90e4-7cfa93fb87e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.880460 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-combined-ca-bundle\") pod \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\" (UID: \"05953b8a-db86-4a1d-90e4-7cfa93fb87e3\") "
Nov 25 13:07:09 crc kubenswrapper[4688]: W1125 13:07:09.881311 4688 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/05953b8a-db86-4a1d-90e4-7cfa93fb87e3/volumes/kubernetes.io~secret/combined-ca-bundle
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.881393 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.881409 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05953b8a-db86-4a1d-90e4-7cfa93fb87e3" (UID: "05953b8a-db86-4a1d-90e4-7cfa93fb87e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.884799 4688 scope.go:117] "RemoveContainer" containerID="2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.920777 4688 scope.go:117] "RemoveContainer" containerID="5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd"
Nov 25 13:07:09 crc kubenswrapper[4688]: E1125 13:07:09.921443 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd\": container with ID starting with 5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd not found: ID does not exist" containerID="5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.921476 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd"} err="failed to get container status \"5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd\": rpc error: code = NotFound desc = could not find container \"5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd\": container with ID starting with 5d0a2f2b68fdbc4cea0d5e2692e9dcf76d2e3f2aa9e1d83a2bf8ca468f8761fd not found: ID does not exist"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.921503 4688 scope.go:117] "RemoveContainer" containerID="41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a"
Nov 25 13:07:09 crc kubenswrapper[4688]: E1125 13:07:09.921792 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a\": container with ID starting with 41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a not found: ID does not exist" containerID="41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.921813 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a"} err="failed to get container status \"41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a\": rpc error: code = NotFound desc = could not find container \"41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a\": container with ID starting with 41cc7046be1be1e2ce71e6c328d51078fed04978790a8f746901e13a2348199a not found: ID does not exist"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.921830 4688 scope.go:117] "RemoveContainer" containerID="3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680"
Nov 25 13:07:09 crc kubenswrapper[4688]: E1125 13:07:09.922039 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680\": container with ID starting with 3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680 not found: ID does not exist" containerID="3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.922061 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680"} err="failed to get container status \"3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680\": rpc error: code = NotFound desc = could not find container \"3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680\": container with ID starting with 3306d0f15028ede4d09243e0ce1404be0448730fec956be442fc219d74f72680 not found: ID does not exist"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.922078 4688 scope.go:117] "RemoveContainer" containerID="2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e"
Nov 25 13:07:09 crc kubenswrapper[4688]: E1125 13:07:09.922326 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e\": container with ID starting with 2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e not found: ID does not exist" containerID="2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.922360 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e"} err="failed to get container status \"2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e\": rpc error: code = NotFound desc = could not find container \"2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e\": container with ID starting with 2d4ff0f75f4c16899ddaa21fa48cc87ff7e3b46232484d9445c6cbfcee98b86e not found: ID does not exist"
Nov 25 13:07:09 crc kubenswrapper[4688]: I1125 13:07:09.984293 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05953b8a-db86-4a1d-90e4-7cfa93fb87e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.078353 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.089510 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.103920 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Nov 25 13:07:10 crc kubenswrapper[4688]: E1125 13:07:10.104320 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-evaluator"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.104338 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-evaluator"
Nov 25 13:07:10 crc kubenswrapper[4688]: E1125 13:07:10.104356 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-listener"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.104362 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-listener"
Nov 25 13:07:10 crc kubenswrapper[4688]: E1125 13:07:10.104381 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-api"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.104387 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-api"
Nov 25 13:07:10 crc kubenswrapper[4688]: E1125 13:07:10.104417 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-notifier"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.104423 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-notifier"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.104662 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-listener"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.104688 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-evaluator"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.104696 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-notifier"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.104711 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" containerName="aodh-api"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.107574 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.110617 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.110766 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.111087 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.111502 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tn6ss"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.111662 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.126127 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.187643 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-config-data\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.187856 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-scripts\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.187888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-internal-tls-certs\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.188071 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47tc\" (UniqueName: \"kubernetes.io/projected/7ac0877c-91d4-40af-88c9-31fc8cb74e86-kube-api-access-f47tc\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.188189 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.188214 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-public-tls-certs\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.290018 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-scripts\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.290091 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-internal-tls-certs\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.290226 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47tc\" (UniqueName: \"kubernetes.io/projected/7ac0877c-91d4-40af-88c9-31fc8cb74e86-kube-api-access-f47tc\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.290341 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.290364 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-public-tls-certs\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.290448 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-config-data\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.294793 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-internal-tls-certs\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.294825 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-scripts\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.295656 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.296452 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-config-data\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.300304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-public-tls-certs\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.312434 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47tc\" (UniqueName: \"kubernetes.io/projected/7ac0877c-91d4-40af-88c9-31fc8cb74e86-kube-api-access-f47tc\") pod \"aodh-0\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.422559 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.760460 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05953b8a-db86-4a1d-90e4-7cfa93fb87e3" path="/var/lib/kubelet/pods/05953b8a-db86-4a1d-90e4-7cfa93fb87e3/volumes"
Nov 25 13:07:10 crc kubenswrapper[4688]: I1125 13:07:10.971233 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 25 13:07:10 crc kubenswrapper[4688]: W1125 13:07:10.979796 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac0877c_91d4_40af_88c9_31fc8cb74e86.slice/crio-848edcfe595a30eb7dcc18e4c628c27c4025accee09fcc8013039bf833b616fa WatchSource:0}: Error finding container 848edcfe595a30eb7dcc18e4c628c27c4025accee09fcc8013039bf833b616fa: Status 404 returned error can't find the container with id 848edcfe595a30eb7dcc18e4c628c27c4025accee09fcc8013039bf833b616fa
Nov 25 13:07:11 crc kubenswrapper[4688]: I1125 13:07:11.788407 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerStarted","Data":"848edcfe595a30eb7dcc18e4c628c27c4025accee09fcc8013039bf833b616fa"}
Nov 25 13:07:11 crc kubenswrapper[4688]: I1125 13:07:11.790403 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerStarted","Data":"77898042efe95bd7eccf59a937fcbafbb36c5c138dfc85e62d1d2c8806c4e2f6"}
Nov 25 13:07:12 crc kubenswrapper[4688]: I1125 13:07:12.803411 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerStarted","Data":"d5621bea1f9d451092e82935eea1770b8fd5a09af7ae0472cc6bf476c72863c3"}
Nov 25 13:07:12 crc kubenswrapper[4688]: I1125 13:07:12.803752 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerStarted","Data":"228f75c3f65f645306da970311e5d0763e8f3165c8a494b6c2e2cdf426888623"}
Nov 25 13:07:14 crc kubenswrapper[4688]: I1125 13:07:14.823561 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerStarted","Data":"b3b6e7721e02eee97927ce6b63493df8eb9d1f8f2b6434fa5bb68a426eaeb6ec"}
Nov 25 13:07:15 crc kubenswrapper[4688]: I1125 13:07:15.838758 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerStarted","Data":"4f383a34cbfed8388362ed7920cd00848ed680e173128b04174a69ef6c261b48"}
Nov 25 13:07:16 crc kubenswrapper[4688]: I1125 13:07:16.745699 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:07:16 crc kubenswrapper[4688]: E1125 13:07:16.746600 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:07:18 crc kubenswrapper[4688]: I1125 13:07:18.869747 4688 generic.go:334] "Generic (PLEG): container finished" podID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerID="77898042efe95bd7eccf59a937fcbafbb36c5c138dfc85e62d1d2c8806c4e2f6" exitCode=0
Nov 25 13:07:18 crc kubenswrapper[4688]: I1125 13:07:18.869799 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerDied","Data":"77898042efe95bd7eccf59a937fcbafbb36c5c138dfc85e62d1d2c8806c4e2f6"}
Nov 25 13:07:18 crc kubenswrapper[4688]: I1125 13:07:18.905974 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=4.7694622540000005 podStartE2EDuration="8.905955088s" podCreationTimestamp="2025-11-25 13:07:10 +0000 UTC" firstStartedPulling="2025-11-25 13:07:10.981797378 +0000 UTC m=+3181.091426246" lastFinishedPulling="2025-11-25 13:07:15.118290212 +0000 UTC m=+3185.227919080" observedRunningTime="2025-11-25 13:07:15.857569313 +0000 UTC m=+3185.967198181" watchObservedRunningTime="2025-11-25 13:07:18.905955088 +0000 UTC m=+3189.015583976"
Nov 25 13:07:20 crc kubenswrapper[4688]: I1125 13:07:20.894639 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerStarted","Data":"b8f4be1d4a8ba00c642684e932d69af36f43746d2762f869c541c9ef846b3584"}
Nov 25 13:07:25 crc kubenswrapper[4688]: I1125 13:07:25.950053 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerStarted","Data":"aa4a42469e07f7d8df47024f7273a63df40eaab753751c7286b89fddca2444b9"}
Nov 25 13:07:25 crc kubenswrapper[4688]: I1125 13:07:25.950353 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerStarted","Data":"adefe1fa6b527b9fbc0de76d0a8373d7887a9f904e0cb5d3032bb1b88e5cf412"}
Nov 25 13:07:25 crc kubenswrapper[4688]: I1125 13:07:25.987624 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.987602184 podStartE2EDuration="18.987602184s" podCreationTimestamp="2025-11-25 13:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:07:25.975310491 +0000 UTC m=+3196.084939369" watchObservedRunningTime="2025-11-25 13:07:25.987602184 +0000 UTC m=+3196.097231062"
Nov 25 13:07:28 crc kubenswrapper[4688]: I1125 13:07:28.085814 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:28 crc kubenswrapper[4688]: I1125 13:07:28.741016 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:07:28 crc kubenswrapper[4688]: E1125 13:07:28.742152 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:07:38 crc kubenswrapper[4688]: I1125 13:07:38.085719 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:38 crc kubenswrapper[4688]: I1125 13:07:38.093635 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:39 crc kubenswrapper[4688]: I1125 13:07:39.113037 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 25 13:07:43 crc kubenswrapper[4688]: I1125 13:07:43.739441 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:07:43 crc kubenswrapper[4688]: E1125 13:07:43.740113 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:07:58 crc kubenswrapper[4688]: I1125 13:07:58.740348 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:07:58 crc kubenswrapper[4688]: E1125 13:07:58.741385 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:08:10 crc kubenswrapper[4688]: I1125 13:08:10.749221 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:08:10 crc kubenswrapper[4688]: E1125 13:08:10.750045 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:08:25 crc kubenswrapper[4688]: I1125 13:08:25.740238 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:08:25 crc kubenswrapper[4688]: E1125 13:08:25.741094 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:08:40 crc kubenswrapper[4688]: I1125 13:08:40.753469 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:08:40 crc kubenswrapper[4688]: E1125 13:08:40.754387 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:08:52 crc kubenswrapper[4688]: I1125 13:08:52.739785 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:08:52 crc kubenswrapper[4688]: E1125 13:08:52.741117 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:08:52 crc kubenswrapper[4688]: I1125 13:08:52.880820 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5dcbb6d5d7-bpx7g" podUID="2c3f8ead-c9ee-4ce5-923a-558a17e1f688" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Nov 25 13:09:06 crc kubenswrapper[4688]: I1125 13:09:06.740930 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:09:06 crc kubenswrapper[4688]: E1125 13:09:06.742156 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:09:20 crc kubenswrapper[4688]: I1125 13:09:20.749255 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:09:20 crc kubenswrapper[4688]: E1125 13:09:20.750327 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:09:31 crc kubenswrapper[4688]: I1125 13:09:31.740012 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:09:31 crc kubenswrapper[4688]: E1125 13:09:31.741072 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:09:44 crc kubenswrapper[4688]: I1125 13:09:44.739760 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60"
Nov 25 13:09:44 crc kubenswrapper[4688]: E1125 13:09:44.740596 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:09:50 crc kubenswrapper[4688]: I1125 13:09:50.586028 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/2.log"
Nov 25 13:09:52 crc kubenswrapper[4688]: I1125 13:09:52.595182 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 13:09:52 crc kubenswrapper[4688]: I1125 13:09:52.595939 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="prometheus" containerID="cri-o://b8f4be1d4a8ba00c642684e932d69af36f43746d2762f869c541c9ef846b3584" gracePeriod=600
Nov 25 13:09:52 crc kubenswrapper[4688]: I1125 13:09:52.596434 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="thanos-sidecar" containerID="cri-o://aa4a42469e07f7d8df47024f7273a63df40eaab753751c7286b89fddca2444b9" gracePeriod=600
Nov 25 13:09:52 crc kubenswrapper[4688]: I1125 13:09:52.596480 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="config-reloader" containerID="cri-o://adefe1fa6b527b9fbc0de76d0a8373d7887a9f904e0cb5d3032bb1b88e5cf412" gracePeriod=600
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.085902 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.17:9090/-/ready\": dial tcp 10.217.1.17:9090: connect: connection refused"
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.482718 4688 generic.go:334] "Generic (PLEG): container finished" podID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerID="aa4a42469e07f7d8df47024f7273a63df40eaab753751c7286b89fddca2444b9" exitCode=0
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.482753 4688 generic.go:334] "Generic (PLEG): container finished" podID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerID="adefe1fa6b527b9fbc0de76d0a8373d7887a9f904e0cb5d3032bb1b88e5cf412" exitCode=0
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.482762 4688 generic.go:334] "Generic (PLEG): container finished" podID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerID="b8f4be1d4a8ba00c642684e932d69af36f43746d2762f869c541c9ef846b3584" exitCode=0
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.482789 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerDied","Data":"aa4a42469e07f7d8df47024f7273a63df40eaab753751c7286b89fddca2444b9"}
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.482821 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerDied","Data":"adefe1fa6b527b9fbc0de76d0a8373d7887a9f904e0cb5d3032bb1b88e5cf412"}
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.482837 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerDied","Data":"b8f4be1d4a8ba00c642684e932d69af36f43746d2762f869c541c9ef846b3584"}
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.680405 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.825818 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") "
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.825924 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config-out\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") "
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826013 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-tls-assets\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") "
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826044 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") "
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826074 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/650db49e-f2cf-4a21-96e0-1eed7d15d13e-prometheus-metric-storage-rulefiles-0\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") "
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826190 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") "
Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826237 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName:
\"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-thanos-prometheus-http-client-file\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826300 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826365 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x7v2\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-kube-api-access-8x7v2\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826401 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-secret-combined-ca-bundle\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.826475 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config\") pod \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\" (UID: \"650db49e-f2cf-4a21-96e0-1eed7d15d13e\") " Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.831675 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650db49e-f2cf-4a21-96e0-1eed7d15d13e-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.835449 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config-out" (OuterVolumeSpecName: "config-out") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.835828 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.836807 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config" (OuterVolumeSpecName: "config") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.837847 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.837999 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.838690 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-kube-api-access-8x7v2" (OuterVolumeSpecName: "kube-api-access-8x7v2") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "kube-api-access-8x7v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.839486 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.853602 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.884184 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928598 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x7v2\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-kube-api-access-8x7v2\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928634 4688 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928669 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") on node \"crc\" " Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928682 4688 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config-out\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928702 4688 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/650db49e-f2cf-4a21-96e0-1eed7d15d13e-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928714 4688 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928726 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/650db49e-f2cf-4a21-96e0-1eed7d15d13e-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928741 4688 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928754 4688 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.928764 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-config\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.956850 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config" (OuterVolumeSpecName: "web-config") pod "650db49e-f2cf-4a21-96e0-1eed7d15d13e" (UID: "650db49e-f2cf-4a21-96e0-1eed7d15d13e"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.966663 4688 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 25 13:09:53 crc kubenswrapper[4688]: I1125 13:09:53.966819 4688 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c") on node "crc" Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.030776 4688 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/650db49e-f2cf-4a21-96e0-1eed7d15d13e-web-config\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.030818 4688 reconciler_common.go:293] "Volume detached for volume \"pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77fb49ec-476e-4b9c-bc1e-6ff793fcde0c\") on node \"crc\" DevicePath \"\"" Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.494705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"650db49e-f2cf-4a21-96e0-1eed7d15d13e","Type":"ContainerDied","Data":"36b1f44d2e0e2b76a5185bcbcdc6228812193f2368f22abbbdc5bfb59757c35f"} Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.494766 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.494763 4688 scope.go:117] "RemoveContainer" containerID="aa4a42469e07f7d8df47024f7273a63df40eaab753751c7286b89fddca2444b9" Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.540501 4688 scope.go:117] "RemoveContainer" containerID="adefe1fa6b527b9fbc0de76d0a8373d7887a9f904e0cb5d3032bb1b88e5cf412" Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.557314 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.570249 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.570879 4688 scope.go:117] "RemoveContainer" containerID="b8f4be1d4a8ba00c642684e932d69af36f43746d2762f869c541c9ef846b3584" Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.599777 4688 scope.go:117] "RemoveContainer" containerID="77898042efe95bd7eccf59a937fcbafbb36c5c138dfc85e62d1d2c8806c4e2f6" Nov 25 13:09:54 crc kubenswrapper[4688]: I1125 13:09:54.750354 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" path="/var/lib/kubelet/pods/650db49e-f2cf-4a21-96e0-1eed7d15d13e/volumes" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.229542 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:09:55 crc kubenswrapper[4688]: E1125 13:09:55.229924 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="thanos-sidecar" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.229941 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="thanos-sidecar" Nov 25 13:09:55 crc kubenswrapper[4688]: E1125 13:09:55.229956 4688 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="init-config-reloader" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.229962 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="init-config-reloader" Nov 25 13:09:55 crc kubenswrapper[4688]: E1125 13:09:55.230006 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="config-reloader" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.230018 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="config-reloader" Nov 25 13:09:55 crc kubenswrapper[4688]: E1125 13:09:55.230044 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="prometheus" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.230050 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="prometheus" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.230209 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="thanos-sidecar" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.230218 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="config-reloader" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.230238 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="650db49e-f2cf-4a21-96e0-1eed7d15d13e" containerName="prometheus" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.231962 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.235872 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.235872 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.236897 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.236937 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.237205 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.240074 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-g9v2p" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.262095 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.264039 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.359758 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.359839 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhtzl\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-kube-api-access-zhtzl\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.359871 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.359910 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.359941 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") 
" pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.359989 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.360029 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.360067 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.360155 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.360198 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.360299 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.461869 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.461916 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") 
pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.461942 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.461997 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.462026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.462079 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.462110 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.462139 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.462158 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhtzl\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-kube-api-access-zhtzl\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.462183 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.462202 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.463103 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.466755 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.469383 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.469493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.469737 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.471122 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.472955 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.474859 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 
13:09:55.487651 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.487904 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.489924 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhtzl\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-kube-api-access-zhtzl\") pod \"prometheus-metric-storage-0\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:55 crc kubenswrapper[4688]: I1125 13:09:55.548803 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:09:56 crc kubenswrapper[4688]: I1125 13:09:56.054218 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:09:56 crc kubenswrapper[4688]: W1125 13:09:56.061935 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a48f058_02c6_4e26_bbd1_ad023e9ce69a.slice/crio-934c87181036659f67715a468c2643c77b6002e5f1b45a449ae761e4b944ab56 WatchSource:0}: Error finding container 934c87181036659f67715a468c2643c77b6002e5f1b45a449ae761e4b944ab56: Status 404 returned error can't find the container with id 934c87181036659f67715a468c2643c77b6002e5f1b45a449ae761e4b944ab56 Nov 25 13:09:56 crc kubenswrapper[4688]: I1125 13:09:56.515863 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerStarted","Data":"934c87181036659f67715a468c2643c77b6002e5f1b45a449ae761e4b944ab56"} Nov 25 13:09:59 crc kubenswrapper[4688]: I1125 13:09:59.545802 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerStarted","Data":"d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766"} Nov 25 13:09:59 crc kubenswrapper[4688]: I1125 13:09:59.740803 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:09:59 crc kubenswrapper[4688]: E1125 13:09:59.741029 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:10:06 crc kubenswrapper[4688]: I1125 13:10:06.642011 4688 generic.go:334] "Generic (PLEG): container finished" podID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" 
containerID="d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766" exitCode=0 Nov 25 13:10:06 crc kubenswrapper[4688]: I1125 13:10:06.642086 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerDied","Data":"d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766"} Nov 25 13:10:07 crc kubenswrapper[4688]: I1125 13:10:07.669854 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerStarted","Data":"32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647"} Nov 25 13:10:10 crc kubenswrapper[4688]: I1125 13:10:10.698092 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerStarted","Data":"132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9"} Nov 25 13:10:10 crc kubenswrapper[4688]: I1125 13:10:10.698765 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerStarted","Data":"39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a"} Nov 25 13:10:10 crc kubenswrapper[4688]: I1125 13:10:10.724192 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.724172277 podStartE2EDuration="15.724172277s" podCreationTimestamp="2025-11-25 13:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:10:10.72170475 +0000 UTC m=+3360.831333628" watchObservedRunningTime="2025-11-25 13:10:10.724172277 +0000 UTC m=+3360.833801145" Nov 25 13:10:14 crc kubenswrapper[4688]: I1125 13:10:14.740500 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:10:14 crc kubenswrapper[4688]: E1125 13:10:14.741420 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:10:15 crc kubenswrapper[4688]: I1125 13:10:15.549874 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 25 13:10:25 crc kubenswrapper[4688]: I1125 13:10:25.549456 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 25 13:10:25 crc kubenswrapper[4688]: I1125 13:10:25.556955 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 25 13:10:25 crc kubenswrapper[4688]: I1125 13:10:25.839470 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 25 13:10:27 crc kubenswrapper[4688]: I1125 13:10:27.740131 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:10:27 crc kubenswrapper[4688]: E1125 13:10:27.740718 4688 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:10:41 crc kubenswrapper[4688]: I1125 13:10:41.740150 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:10:41 crc kubenswrapper[4688]: E1125 13:10:41.741855 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:10:53 crc kubenswrapper[4688]: I1125 13:10:53.739905 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:10:54 crc kubenswrapper[4688]: I1125 13:10:54.146566 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"7ad0523d7f01b540add0ee302083034de4611d277d3f73ffc112bf68fb001400"} Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.009193 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bz9xr"] Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.012240 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.026653 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz9xr"] Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.126131 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-utilities\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.126214 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-catalog-content\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.126278 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzr9w\" (UniqueName: \"kubernetes.io/projected/20bb4fec-ab28-4413-a4fd-f16846eb3abc-kube-api-access-rzr9w\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.228350 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-utilities\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.228418 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-catalog-content\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.228459 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzr9w\" (UniqueName: \"kubernetes.io/projected/20bb4fec-ab28-4413-a4fd-f16846eb3abc-kube-api-access-rzr9w\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.229052 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-catalog-content\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.229277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-utilities\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.251169 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rzr9w\" (UniqueName: \"kubernetes.io/projected/20bb4fec-ab28-4413-a4fd-f16846eb3abc-kube-api-access-rzr9w\") pod \"community-operators-bz9xr\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.338302 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:04 crc kubenswrapper[4688]: I1125 13:11:04.917816 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz9xr"] Nov 25 13:11:05 crc kubenswrapper[4688]: I1125 13:11:05.269247 4688 generic.go:334] "Generic (PLEG): container finished" podID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerID="7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4" exitCode=0 Nov 25 13:11:05 crc kubenswrapper[4688]: I1125 13:11:05.269318 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz9xr" event={"ID":"20bb4fec-ab28-4413-a4fd-f16846eb3abc","Type":"ContainerDied","Data":"7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4"} Nov 25 13:11:05 crc kubenswrapper[4688]: I1125 13:11:05.269352 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz9xr" event={"ID":"20bb4fec-ab28-4413-a4fd-f16846eb3abc","Type":"ContainerStarted","Data":"d38f898505d7a4bfca786e6f945e1589c801e9c01a029c6237906bfce2df54ce"} Nov 25 13:11:07 crc kubenswrapper[4688]: I1125 13:11:07.296631 4688 generic.go:334] "Generic (PLEG): container finished" podID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerID="a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73" exitCode=0 Nov 25 13:11:07 crc kubenswrapper[4688]: I1125 13:11:07.296724 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz9xr" event={"ID":"20bb4fec-ab28-4413-a4fd-f16846eb3abc","Type":"ContainerDied","Data":"a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73"} Nov 25 13:11:08 crc kubenswrapper[4688]: I1125 13:11:08.042734 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-gkjms"] Nov 25 13:11:08 crc kubenswrapper[4688]: I1125 13:11:08.049205 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-gkjms"] Nov 25 13:11:08 crc kubenswrapper[4688]: I1125 13:11:08.094282 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1f34-account-create-zgm8c"] Nov 25 13:11:08 crc kubenswrapper[4688]: I1125 13:11:08.106680 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1f34-account-create-zgm8c"] Nov 25 13:11:08 crc kubenswrapper[4688]: I1125 13:11:08.768666 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb311ca-6585-4cc6-8b68-23755a207433" path="/var/lib/kubelet/pods/9bb311ca-6585-4cc6-8b68-23755a207433/volumes" Nov 25 13:11:08 crc kubenswrapper[4688]: I1125 13:11:08.835131 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61628bc-7d4e-4548-9770-6f764e6b89d7" path="/var/lib/kubelet/pods/d61628bc-7d4e-4548-9770-6f764e6b89d7/volumes" Nov 25 13:11:09 crc kubenswrapper[4688]: I1125 13:11:09.346263 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz9xr" 
event={"ID":"20bb4fec-ab28-4413-a4fd-f16846eb3abc","Type":"ContainerStarted","Data":"e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355"} Nov 25 13:11:09 crc kubenswrapper[4688]: I1125 13:11:09.375767 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bz9xr" podStartSLOduration=2.59729492 podStartE2EDuration="6.375741797s" podCreationTimestamp="2025-11-25 13:11:03 +0000 UTC" firstStartedPulling="2025-11-25 13:11:05.274779337 +0000 UTC m=+3415.384408225" lastFinishedPulling="2025-11-25 13:11:09.053226224 +0000 UTC m=+3419.162855102" observedRunningTime="2025-11-25 13:11:09.363362437 +0000 UTC m=+3419.472991325" watchObservedRunningTime="2025-11-25 13:11:09.375741797 +0000 UTC m=+3419.485370655" Nov 25 13:11:14 crc kubenswrapper[4688]: I1125 13:11:14.338659 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:14 crc kubenswrapper[4688]: I1125 13:11:14.339693 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:14 crc kubenswrapper[4688]: I1125 13:11:14.391008 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:14 crc kubenswrapper[4688]: I1125 13:11:14.446575 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:14 crc kubenswrapper[4688]: I1125 13:11:14.628671 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz9xr"] Nov 25 13:11:16 crc kubenswrapper[4688]: I1125 13:11:16.417053 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bz9xr" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerName="registry-server" containerID="cri-o://e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355" gracePeriod=2 Nov 25 13:11:16 crc kubenswrapper[4688]: I1125 13:11:16.830391 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:16 crc kubenswrapper[4688]: I1125 13:11:16.992218 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzr9w\" (UniqueName: \"kubernetes.io/projected/20bb4fec-ab28-4413-a4fd-f16846eb3abc-kube-api-access-rzr9w\") pod \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " Nov 25 13:11:16 crc kubenswrapper[4688]: I1125 13:11:16.992997 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-utilities\") pod \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " Nov 25 13:11:16 crc kubenswrapper[4688]: I1125 13:11:16.993309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-catalog-content\") pod \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\" (UID: \"20bb4fec-ab28-4413-a4fd-f16846eb3abc\") " Nov 25 13:11:16 crc kubenswrapper[4688]: I1125 13:11:16.994588 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-utilities" (OuterVolumeSpecName: "utilities") pod "20bb4fec-ab28-4413-a4fd-f16846eb3abc" (UID: "20bb4fec-ab28-4413-a4fd-f16846eb3abc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:11:16 crc kubenswrapper[4688]: I1125 13:11:16.995962 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.002816 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bb4fec-ab28-4413-a4fd-f16846eb3abc-kube-api-access-rzr9w" (OuterVolumeSpecName: "kube-api-access-rzr9w") pod "20bb4fec-ab28-4413-a4fd-f16846eb3abc" (UID: "20bb4fec-ab28-4413-a4fd-f16846eb3abc"). InnerVolumeSpecName "kube-api-access-rzr9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.062701 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20bb4fec-ab28-4413-a4fd-f16846eb3abc" (UID: "20bb4fec-ab28-4413-a4fd-f16846eb3abc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.098247 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bb4fec-ab28-4413-a4fd-f16846eb3abc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.098290 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzr9w\" (UniqueName: \"kubernetes.io/projected/20bb4fec-ab28-4413-a4fd-f16846eb3abc-kube-api-access-rzr9w\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.428543 4688 generic.go:334] "Generic (PLEG): container finished" podID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerID="e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355" exitCode=0 Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.428597 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz9xr" event={"ID":"20bb4fec-ab28-4413-a4fd-f16846eb3abc","Type":"ContainerDied","Data":"e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355"} Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.428644 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz9xr" event={"ID":"20bb4fec-ab28-4413-a4fd-f16846eb3abc","Type":"ContainerDied","Data":"d38f898505d7a4bfca786e6f945e1589c801e9c01a029c6237906bfce2df54ce"} Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.428672 4688 scope.go:117] "RemoveContainer" containerID="e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.428667 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz9xr" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.471871 4688 scope.go:117] "RemoveContainer" containerID="a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.472218 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz9xr"] Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.482387 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bz9xr"] Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.493875 4688 scope.go:117] "RemoveContainer" containerID="7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.549867 4688 scope.go:117] "RemoveContainer" containerID="e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355" Nov 25 13:11:17 crc kubenswrapper[4688]: E1125 13:11:17.550581 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355\": container with ID starting with e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355 not found: ID does not exist" containerID="e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.550701 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355"} err="failed to get container status \"e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355\": rpc error: code = NotFound desc = could not find container \"e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355\": container with ID starting with e04715d0a4ef16ae4a6ca28579ebca6e6307e82284d921ec9d89f65d11e06355 not found: ID does not exist" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.550801 4688 scope.go:117] "RemoveContainer" containerID="a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73" Nov 25 13:11:17 crc kubenswrapper[4688]: E1125 13:11:17.551159 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73\": container with ID starting with a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73 not found: ID does not exist" containerID="a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.551188 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73"} err="failed to get container status \"a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73\": rpc error: code = NotFound desc = could not find container \"a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73\": container with ID starting with a87507dff6cde35656e513d272a8b537ef5d566734362491edd96543d6c62e73 not found: ID does not exist" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.551457 4688 scope.go:117] "RemoveContainer" containerID="7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4" Nov 25 13:11:17 crc kubenswrapper[4688]: E1125 13:11:17.551825 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4\": container with ID starting with 7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4 not found: ID does not exist" containerID="7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4" Nov 25 13:11:17 crc kubenswrapper[4688]: I1125 13:11:17.551866 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4"} err="failed to get container status \"7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4\": rpc error: code = NotFound desc = could not find container \"7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4\": container with ID starting with 7a732f91e2362482b8a7a1a6e00546e1ba09c7a5875da467366757bccf3647a4 not found: ID does not exist" Nov 25 13:11:18 crc kubenswrapper[4688]: I1125 13:11:18.031574 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-pl44z"] Nov 25 13:11:18 crc kubenswrapper[4688]: I1125 13:11:18.043224 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-pl44z"] Nov 25 13:11:18 crc kubenswrapper[4688]: I1125 13:11:18.754182 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" path="/var/lib/kubelet/pods/20bb4fec-ab28-4413-a4fd-f16846eb3abc/volumes" Nov 25 13:11:18 crc kubenswrapper[4688]: I1125 13:11:18.755099 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c25d128-ae77-4b24-a447-c1a3ecd0d9bd" path="/var/lib/kubelet/pods/5c25d128-ae77-4b24-a447-c1a3ecd0d9bd/volumes" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.041402 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-49974"] Nov 25 13:11:34 crc kubenswrapper[4688]: E1125 13:11:34.042517 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerName="extract-content" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.042550 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerName="extract-content" Nov 25 13:11:34 crc kubenswrapper[4688]: E1125 13:11:34.042587 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerName="registry-server" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.042597 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerName="registry-server" Nov 25 13:11:34 crc kubenswrapper[4688]: E1125 13:11:34.042617 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerName="extract-utilities" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.042628 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerName="extract-utilities" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.042966 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bb4fec-ab28-4413-a4fd-f16846eb3abc" containerName="registry-server" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.044991 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.057587 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49974"] Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.070370 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqcnd\" (UniqueName: \"kubernetes.io/projected/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-kube-api-access-pqcnd\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.070687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-catalog-content\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.070884 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-utilities\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.173042 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-utilities\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.173164 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqcnd\" (UniqueName: \"kubernetes.io/projected/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-kube-api-access-pqcnd\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.173986 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-catalog-content\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.174180 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-utilities\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.174415 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-catalog-content\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.200707 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pqcnd\" (UniqueName: \"kubernetes.io/projected/2c2211fa-01ae-43b9-9a3a-273b2e1d79ed-kube-api-access-pqcnd\") pod \"redhat-operators-49974\" (UID: \"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed\") " pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.366999 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.745742 4688 scope.go:117] "RemoveContainer" containerID="cd962b477a3b6dac04821d20211041570f585dbbef34a00cb719e68a1b8a8320" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.813669 4688 scope.go:117] "RemoveContainer" containerID="3558d08d969ac34da660840cce76beb6985f13c32155a0dfe79f485947ec9ad4" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.862334 4688 scope.go:117] "RemoveContainer" containerID="4d88f20743144536d0d8acb93a7fa9f20ce40b802bc34157f29b28071f6b4f6a" Nov 25 13:11:34 crc kubenswrapper[4688]: I1125 13:11:34.888309 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49974"] Nov 25 13:11:35 crc kubenswrapper[4688]: I1125 13:11:35.627334 4688 generic.go:334] "Generic (PLEG): container finished" podID="2c2211fa-01ae-43b9-9a3a-273b2e1d79ed" containerID="248a81fcad990256b2670cb20c19f8ff334486a1b3e3b79b9528a985ed674c61" exitCode=0 Nov 25 13:11:35 crc kubenswrapper[4688]: I1125 13:11:35.627441 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49974" event={"ID":"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed","Type":"ContainerDied","Data":"248a81fcad990256b2670cb20c19f8ff334486a1b3e3b79b9528a985ed674c61"} Nov 25 13:11:35 crc kubenswrapper[4688]: I1125 13:11:35.627588 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49974" event={"ID":"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed","Type":"ContainerStarted","Data":"9f035876f81301a938b02c2e85a0e0839dc4482491d1718a1e31a0ac19306dd0"} Nov 25 13:11:35 crc kubenswrapper[4688]: I1125 13:11:35.629328 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 13:11:52 crc kubenswrapper[4688]: I1125 13:11:52.686714 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/2.log" Nov 25 13:11:53 crc kubenswrapper[4688]: I1125 13:11:53.834972 4688 generic.go:334] "Generic (PLEG): container finished" podID="2c2211fa-01ae-43b9-9a3a-273b2e1d79ed" containerID="d22c7a7ac1009d83de52e430c0ce6114affb95a37809bff2a245a465ad78f583" exitCode=0 Nov 25 13:11:53 crc kubenswrapper[4688]: I1125 13:11:53.835048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49974" event={"ID":"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed","Type":"ContainerDied","Data":"d22c7a7ac1009d83de52e430c0ce6114affb95a37809bff2a245a465ad78f583"} Nov 25 13:11:54 crc kubenswrapper[4688]: I1125 13:11:54.086501 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 13:11:54 crc kubenswrapper[4688]: I1125 13:11:54.087146 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-api" containerID="cri-o://228f75c3f65f645306da970311e5d0763e8f3165c8a494b6c2e2cdf426888623" gracePeriod=30 Nov 25 13:11:54 crc 
kubenswrapper[4688]: I1125 13:11:54.087163 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-listener" containerID="cri-o://4f383a34cbfed8388362ed7920cd00848ed680e173128b04174a69ef6c261b48" gracePeriod=30 Nov 25 13:11:54 crc kubenswrapper[4688]: I1125 13:11:54.087208 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-notifier" containerID="cri-o://b3b6e7721e02eee97927ce6b63493df8eb9d1f8f2b6434fa5bb68a426eaeb6ec" gracePeriod=30 Nov 25 13:11:54 crc kubenswrapper[4688]: I1125 13:11:54.087671 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-evaluator" containerID="cri-o://d5621bea1f9d451092e82935eea1770b8fd5a09af7ae0472cc6bf476c72863c3" gracePeriod=30 Nov 25 13:11:54 crc kubenswrapper[4688]: I1125 13:11:54.848279 4688 generic.go:334] "Generic (PLEG): container finished" podID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerID="228f75c3f65f645306da970311e5d0763e8f3165c8a494b6c2e2cdf426888623" exitCode=0 Nov 25 13:11:54 crc kubenswrapper[4688]: I1125 13:11:54.848327 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerDied","Data":"228f75c3f65f645306da970311e5d0763e8f3165c8a494b6c2e2cdf426888623"} Nov 25 13:11:55 crc kubenswrapper[4688]: I1125 13:11:55.862342 4688 generic.go:334] "Generic (PLEG): container finished" podID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerID="d5621bea1f9d451092e82935eea1770b8fd5a09af7ae0472cc6bf476c72863c3" exitCode=0 Nov 25 13:11:55 crc kubenswrapper[4688]: I1125 13:11:55.862428 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerDied","Data":"d5621bea1f9d451092e82935eea1770b8fd5a09af7ae0472cc6bf476c72863c3"} Nov 25 13:11:56 crc kubenswrapper[4688]: I1125 13:11:56.884978 4688 generic.go:334] "Generic (PLEG): container finished" podID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerID="b3b6e7721e02eee97927ce6b63493df8eb9d1f8f2b6434fa5bb68a426eaeb6ec" exitCode=0 Nov 25 13:11:56 crc kubenswrapper[4688]: I1125 13:11:56.885030 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerDied","Data":"b3b6e7721e02eee97927ce6b63493df8eb9d1f8f2b6434fa5bb68a426eaeb6ec"} Nov 25 13:11:57 crc kubenswrapper[4688]: I1125 13:11:57.918438 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49974" event={"ID":"2c2211fa-01ae-43b9-9a3a-273b2e1d79ed","Type":"ContainerStarted","Data":"8d70eadb0cae89b3d1ee7fa1228e490299f6ec3dd347b776d916e601501173aa"} Nov 25 13:11:57 crc kubenswrapper[4688]: E1125 13:11:57.922658 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac0877c_91d4_40af_88c9_31fc8cb74e86.slice/crio-conmon-4f383a34cbfed8388362ed7920cd00848ed680e173128b04174a69ef6c261b48.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac0877c_91d4_40af_88c9_31fc8cb74e86.slice/crio-4f383a34cbfed8388362ed7920cd00848ed680e173128b04174a69ef6c261b48.scope\": RecentStats: unable to find data in memory cache]" Nov 25 13:11:57 crc kubenswrapper[4688]: I1125 13:11:57.927008 4688 generic.go:334] "Generic (PLEG): container finished" podID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerID="4f383a34cbfed8388362ed7920cd00848ed680e173128b04174a69ef6c261b48" exitCode=0 Nov 25 13:11:57 crc kubenswrapper[4688]: I1125 13:11:57.927107 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerDied","Data":"4f383a34cbfed8388362ed7920cd00848ed680e173128b04174a69ef6c261b48"} Nov 25 13:11:57 crc kubenswrapper[4688]: I1125 13:11:57.950909 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-49974" podStartSLOduration=2.771873524 podStartE2EDuration="23.950864237s" podCreationTimestamp="2025-11-25 13:11:34 +0000 UTC" firstStartedPulling="2025-11-25 13:11:35.629047022 +0000 UTC m=+3445.738675890" lastFinishedPulling="2025-11-25 13:11:56.808037745 +0000 UTC m=+3466.917666603" observedRunningTime="2025-11-25 13:11:57.941811615 +0000 UTC m=+3468.051440483" watchObservedRunningTime="2025-11-25 13:11:57.950864237 +0000 UTC m=+3468.060493105" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.522341 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.578816 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-internal-tls-certs\") pod \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.578892 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-public-tls-certs\") pod \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.578939 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f47tc\" (UniqueName: \"kubernetes.io/projected/7ac0877c-91d4-40af-88c9-31fc8cb74e86-kube-api-access-f47tc\") pod \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.578971 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-combined-ca-bundle\") pod \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.579161 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-scripts\") pod \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.579188 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-config-data\") pod \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\" (UID: \"7ac0877c-91d4-40af-88c9-31fc8cb74e86\") " Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.628048 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-scripts" (OuterVolumeSpecName: "scripts") pod "7ac0877c-91d4-40af-88c9-31fc8cb74e86" (UID: "7ac0877c-91d4-40af-88c9-31fc8cb74e86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.644456 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac0877c-91d4-40af-88c9-31fc8cb74e86-kube-api-access-f47tc" (OuterVolumeSpecName: "kube-api-access-f47tc") pod "7ac0877c-91d4-40af-88c9-31fc8cb74e86" (UID: "7ac0877c-91d4-40af-88c9-31fc8cb74e86"). InnerVolumeSpecName "kube-api-access-f47tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.684875 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.684935 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f47tc\" (UniqueName: \"kubernetes.io/projected/7ac0877c-91d4-40af-88c9-31fc8cb74e86-kube-api-access-f47tc\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.723955 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7ac0877c-91d4-40af-88c9-31fc8cb74e86" (UID: "7ac0877c-91d4-40af-88c9-31fc8cb74e86"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.738297 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7ac0877c-91d4-40af-88c9-31fc8cb74e86" (UID: "7ac0877c-91d4-40af-88c9-31fc8cb74e86"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.754740 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-config-data" (OuterVolumeSpecName: "config-data") pod "7ac0877c-91d4-40af-88c9-31fc8cb74e86" (UID: "7ac0877c-91d4-40af-88c9-31fc8cb74e86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.786993 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.787981 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.788819 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.795592 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ac0877c-91d4-40af-88c9-31fc8cb74e86" (UID: "7ac0877c-91d4-40af-88c9-31fc8cb74e86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.891818 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac0877c-91d4-40af-88c9-31fc8cb74e86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.944486 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.944573 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ac0877c-91d4-40af-88c9-31fc8cb74e86","Type":"ContainerDied","Data":"848edcfe595a30eb7dcc18e4c628c27c4025accee09fcc8013039bf833b616fa"} Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.944627 4688 scope.go:117] "RemoveContainer" containerID="4f383a34cbfed8388362ed7920cd00848ed680e173128b04174a69ef6c261b48" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.988738 4688 scope.go:117] "RemoveContainer" containerID="b3b6e7721e02eee97927ce6b63493df8eb9d1f8f2b6434fa5bb68a426eaeb6ec" Nov 25 13:11:58 crc kubenswrapper[4688]: I1125 13:11:58.990718 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.006342 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.014960 4688 scope.go:117] "RemoveContainer" containerID="d5621bea1f9d451092e82935eea1770b8fd5a09af7ae0472cc6bf476c72863c3" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.019966 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 25 13:11:59 crc kubenswrapper[4688]: E1125 13:11:59.020428 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-api" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.020448 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-api" Nov 25 13:11:59 crc kubenswrapper[4688]: E1125 13:11:59.020467 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" 
containerName="aodh-evaluator" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.020475 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-evaluator" Nov 25 13:11:59 crc kubenswrapper[4688]: E1125 13:11:59.020501 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-notifier" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.020512 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-notifier" Nov 25 13:11:59 crc kubenswrapper[4688]: E1125 13:11:59.023148 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-listener" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.023208 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-listener" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.023603 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-api" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.023633 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-listener" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.023649 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-notifier" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.023684 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" containerName="aodh-evaluator" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.025883 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.029057 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.033658 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.033839 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.034161 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-tn6ss" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.034670 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.040265 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.044646 4688 scope.go:117] "RemoveContainer" containerID="228f75c3f65f645306da970311e5d0763e8f3165c8a494b6c2e2cdf426888623" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.095552 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.095607 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-scripts\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.095633 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-config-data\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.095650 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-internal-tls-certs\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.095671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rllzc\" (UniqueName: \"kubernetes.io/projected/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-kube-api-access-rllzc\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.095728 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-public-tls-certs\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.198890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.198993 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-scripts\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.199019 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-config-data\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.199046 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-internal-tls-certs\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.199066 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rllzc\" (UniqueName: \"kubernetes.io/projected/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-kube-api-access-rllzc\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.199143 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-public-tls-certs\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.209245 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-scripts\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.209342 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.209497 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-internal-tls-certs\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.210921 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-config-data\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.211344 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-public-tls-certs\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " 
pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.222836 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rllzc\" (UniqueName: \"kubernetes.io/projected/a0483ad2-006e-4eb4-aa60-9fe5c8eed056-kube-api-access-rllzc\") pod \"aodh-0\" (UID: \"a0483ad2-006e-4eb4-aa60-9fe5c8eed056\") " pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.353293 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.838844 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 13:11:59 crc kubenswrapper[4688]: W1125 13:11:59.848284 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0483ad2_006e_4eb4_aa60_9fe5c8eed056.slice/crio-1567a1859d907e2767d1badf57b83a955c42cb2b51151db12385086ca7840619 WatchSource:0}: Error finding container 1567a1859d907e2767d1badf57b83a955c42cb2b51151db12385086ca7840619: Status 404 returned error can't find the container with id 1567a1859d907e2767d1badf57b83a955c42cb2b51151db12385086ca7840619 Nov 25 13:11:59 crc kubenswrapper[4688]: I1125 13:11:59.956901 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a0483ad2-006e-4eb4-aa60-9fe5c8eed056","Type":"ContainerStarted","Data":"1567a1859d907e2767d1badf57b83a955c42cb2b51151db12385086ca7840619"} Nov 25 13:12:00 crc kubenswrapper[4688]: I1125 13:12:00.754655 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac0877c-91d4-40af-88c9-31fc8cb74e86" path="/var/lib/kubelet/pods/7ac0877c-91d4-40af-88c9-31fc8cb74e86/volumes" Nov 25 13:12:00 crc kubenswrapper[4688]: I1125 13:12:00.979135 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a0483ad2-006e-4eb4-aa60-9fe5c8eed056","Type":"ContainerStarted","Data":"49cc22cbcb61efdc6a25d5624fc30f6fb02222efdcebfc789a0699979fdd6944"} Nov 25 13:12:03 crc kubenswrapper[4688]: I1125 13:12:03.005477 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a0483ad2-006e-4eb4-aa60-9fe5c8eed056","Type":"ContainerStarted","Data":"a3abff54fd39ae54e607ebba158d2958193f45e398a2ca199dd6a30a4cb96dc5"} Nov 25 13:12:04 crc kubenswrapper[4688]: I1125 13:12:04.020846 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a0483ad2-006e-4eb4-aa60-9fe5c8eed056","Type":"ContainerStarted","Data":"8d8cbb1afb50ee5a71825ea206960bf6aff93d58f8d8b65f9ee8877b36a0e7e6"} Nov 25 13:12:04 crc kubenswrapper[4688]: I1125 13:12:04.041990 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.24702667 podStartE2EDuration="6.041971236s" podCreationTimestamp="2025-11-25 13:11:58 +0000 UTC" firstStartedPulling="2025-11-25 13:11:59.85095661 +0000 UTC m=+3469.960585478" lastFinishedPulling="2025-11-25 13:12:03.645901176 +0000 UTC m=+3473.755530044" observedRunningTime="2025-11-25 13:12:04.041708429 +0000 UTC m=+3474.151337297" watchObservedRunningTime="2025-11-25 13:12:04.041971236 +0000 UTC m=+3474.151600104" Nov 25 13:12:04 crc kubenswrapper[4688]: I1125 13:12:04.367810 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:12:04 crc kubenswrapper[4688]: I1125 13:12:04.368215 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:12:04 crc kubenswrapper[4688]: I1125 13:12:04.425307 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:12:05 crc kubenswrapper[4688]: I1125 13:12:05.033869 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a0483ad2-006e-4eb4-aa60-9fe5c8eed056","Type":"ContainerStarted","Data":"c817e8803aad3d274ac94d96568d35e54509aaa2b6706b1dda8525a5ed0a8987"} Nov 25 13:12:05 crc kubenswrapper[4688]: I1125 13:12:05.082785 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49974" Nov 25 13:12:05 crc kubenswrapper[4688]: I1125 13:12:05.184718 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49974"] Nov 25 13:12:05 crc kubenswrapper[4688]: I1125 13:12:05.240565 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fjm2h"] Nov 25 13:12:05 crc kubenswrapper[4688]: I1125 13:12:05.240809 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fjm2h" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="registry-server" containerID="cri-o://66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd" gracePeriod=2 Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.074269 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fjm2h_02febfd8-ed0e-431b-aef6-1ae612335540/registry-server/0.log" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.076702 4688 generic.go:334] "Generic (PLEG): container finished" podID="02febfd8-ed0e-431b-aef6-1ae612335540" containerID="66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd" exitCode=137 Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.076762 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjm2h" event={"ID":"02febfd8-ed0e-431b-aef6-1ae612335540","Type":"ContainerDied","Data":"66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd"} Nov 25 13:12:08 crc kubenswrapper[4688]: E1125 13:12:08.239275 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd is running failed: container process not found" containerID="66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 13:12:08 crc kubenswrapper[4688]: E1125 13:12:08.239919 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd is running failed: container process not found" containerID="66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 13:12:08 crc kubenswrapper[4688]: E1125 13:12:08.240147 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd is running failed: container process not found" containerID="66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd" 
cmd=["grpc_health_probe","-addr=:50051"] Nov 25 13:12:08 crc kubenswrapper[4688]: E1125 13:12:08.240179 4688 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-fjm2h" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="registry-server" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.393945 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fjm2h_02febfd8-ed0e-431b-aef6-1ae612335540/registry-server/0.log" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.395639 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.496459 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-catalog-content\") pod \"02febfd8-ed0e-431b-aef6-1ae612335540\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.496667 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxxzf\" (UniqueName: \"kubernetes.io/projected/02febfd8-ed0e-431b-aef6-1ae612335540-kube-api-access-nxxzf\") pod \"02febfd8-ed0e-431b-aef6-1ae612335540\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.496723 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-utilities\") pod \"02febfd8-ed0e-431b-aef6-1ae612335540\" (UID: \"02febfd8-ed0e-431b-aef6-1ae612335540\") " Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.502198 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-utilities" (OuterVolumeSpecName: "utilities") pod "02febfd8-ed0e-431b-aef6-1ae612335540" (UID: "02febfd8-ed0e-431b-aef6-1ae612335540"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.507065 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02febfd8-ed0e-431b-aef6-1ae612335540-kube-api-access-nxxzf" (OuterVolumeSpecName: "kube-api-access-nxxzf") pod "02febfd8-ed0e-431b-aef6-1ae612335540" (UID: "02febfd8-ed0e-431b-aef6-1ae612335540"). InnerVolumeSpecName "kube-api-access-nxxzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.598856 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxxzf\" (UniqueName: \"kubernetes.io/projected/02febfd8-ed0e-431b-aef6-1ae612335540-kube-api-access-nxxzf\") on node \"crc\" DevicePath \"\"" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.598898 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.628029 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02febfd8-ed0e-431b-aef6-1ae612335540" (UID: "02febfd8-ed0e-431b-aef6-1ae612335540"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:12:08 crc kubenswrapper[4688]: I1125 13:12:08.700550 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02febfd8-ed0e-431b-aef6-1ae612335540-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:12:09 crc kubenswrapper[4688]: I1125 13:12:09.089494 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fjm2h_02febfd8-ed0e-431b-aef6-1ae612335540/registry-server/0.log" Nov 25 13:12:09 crc kubenswrapper[4688]: I1125 13:12:09.090705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjm2h" event={"ID":"02febfd8-ed0e-431b-aef6-1ae612335540","Type":"ContainerDied","Data":"8995f5ffafa381dbe5a9c3940ceb763e5db81fdb269d696666a8aea115b41a6d"} Nov 25 13:12:09 crc kubenswrapper[4688]: I1125 13:12:09.090750 4688 scope.go:117] "RemoveContainer" containerID="66136a5a12d506ca6d2ea1668759d96a3a3fe1a78e89a4fbaa41a7594f96f8dd" Nov 25 13:12:09 crc kubenswrapper[4688]: I1125 13:12:09.090921 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fjm2h" Nov 25 13:12:09 crc kubenswrapper[4688]: I1125 13:12:09.121804 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fjm2h"] Nov 25 13:12:09 crc kubenswrapper[4688]: I1125 13:12:09.123587 4688 scope.go:117] "RemoveContainer" containerID="bc04c1cab6758635ea4c54d1b2ad74c2a8de68e70d45d6b76745bed88d448e8a" Nov 25 13:12:09 crc kubenswrapper[4688]: I1125 13:12:09.134259 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fjm2h"] Nov 25 13:12:09 crc kubenswrapper[4688]: I1125 13:12:09.150389 4688 scope.go:117] "RemoveContainer" containerID="a4d30163b3ecd72cac6f2ffb299a358743adc8a944c945f6caf164c4dc611d67" Nov 25 13:12:10 crc kubenswrapper[4688]: I1125 13:12:10.767216 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" path="/var/lib/kubelet/pods/02febfd8-ed0e-431b-aef6-1ae612335540/volumes" Nov 25 13:13:17 crc kubenswrapper[4688]: I1125 13:13:17.854363 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:13:17 crc kubenswrapper[4688]: I1125 13:13:17.855029 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:13:47 crc kubenswrapper[4688]: I1125 13:13:47.854318 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:13:47 crc kubenswrapper[4688]: I1125 13:13:47.854968 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:13:54 crc kubenswrapper[4688]: I1125 13:13:54.531161 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/2.log" Nov 25 13:13:58 crc kubenswrapper[4688]: I1125 13:13:58.351392 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:13:58 crc kubenswrapper[4688]: I1125 13:13:58.352181 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="prometheus" containerID="cri-o://32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647" gracePeriod=600 Nov 25 13:13:58 crc kubenswrapper[4688]: I1125 13:13:58.352293 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="thanos-sidecar" 
containerID="cri-o://132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9" gracePeriod=600 Nov 25 13:13:58 crc kubenswrapper[4688]: I1125 13:13:58.352327 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="config-reloader" containerID="cri-o://39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a" gracePeriod=600 Nov 25 13:13:58 crc kubenswrapper[4688]: I1125 13:13:58.540935 4688 generic.go:334] "Generic (PLEG): container finished" podID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerID="132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9" exitCode=0 Nov 25 13:13:58 crc kubenswrapper[4688]: I1125 13:13:58.541291 4688 generic.go:334] "Generic (PLEG): container finished" podID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerID="32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647" exitCode=0 Nov 25 13:13:58 crc kubenswrapper[4688]: I1125 13:13:58.541132 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerDied","Data":"132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9"} Nov 25 13:13:58 crc kubenswrapper[4688]: I1125 13:13:58.541339 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerDied","Data":"32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647"} Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.090740 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.178546 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config-out\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.178658 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.178708 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-tls-assets\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.178799 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-rulefiles-0\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.178859 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: 
\"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.178969 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.179552 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-secret-combined-ca-bundle\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.179598 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-thanos-prometheus-http-client-file\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.179623 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-db\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.179642 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhtzl\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-kube-api-access-zhtzl\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.179681 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\" (UID: \"0a48f058-02c6-4e26-bbd1-ad023e9ce69a\") " Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.182985 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-db" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "prometheus-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.185102 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.186701 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config" (OuterVolumeSpecName: "config") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.188796 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-kube-api-access-zhtzl" (OuterVolumeSpecName: "kube-api-access-zhtzl") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "kube-api-access-zhtzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.188807 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.195076 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config-out" (OuterVolumeSpecName: "config-out") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.195122 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.195774 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.199676 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.202185 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.275960 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config" (OuterVolumeSpecName: "web-config") pod "0a48f058-02c6-4e26-bbd1-ad023e9ce69a" (UID: "0a48f058-02c6-4e26-bbd1-ad023e9ce69a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284231 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-db\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284270 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhtzl\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-kube-api-access-zhtzl\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284285 4688 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284301 4688 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config-out\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284316 4688 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284328 4688 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284342 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284354 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-config\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284367 4688 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284384 4688 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.284397 4688 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0a48f058-02c6-4e26-bbd1-ad023e9ce69a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.553840 4688 generic.go:334] "Generic (PLEG): container finished" podID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerID="39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a" exitCode=0 Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.553893 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerDied","Data":"39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a"} Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.553903 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.553930 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0a48f058-02c6-4e26-bbd1-ad023e9ce69a","Type":"ContainerDied","Data":"934c87181036659f67715a468c2643c77b6002e5f1b45a449ae761e4b944ab56"} Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.553954 4688 scope.go:117] "RemoveContainer" containerID="132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.586870 4688 scope.go:117] "RemoveContainer" containerID="39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.588653 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.608495 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.613845 4688 scope.go:117] "RemoveContainer" containerID="32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.632572 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.633134 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="thanos-sidecar" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633170 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="thanos-sidecar" Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.633194 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="extract-content" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633204 4688 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="extract-content" Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.633216 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="prometheus" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633224 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="prometheus" Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.633244 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="extract-utilities" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633252 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="extract-utilities" Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.633279 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="config-reloader" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633288 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="config-reloader" Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.633315 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="registry-server" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633323 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="registry-server" Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.633339 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="init-config-reloader" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633346 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="init-config-reloader" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633618 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="prometheus" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633641 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="thanos-sidecar" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633659 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="02febfd8-ed0e-431b-aef6-1ae612335540" containerName="registry-server" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.633671 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" containerName="config-reloader" Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.637385 4688 util.go:30] "No sandbox for pod can be found. 
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.640734 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.640959 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.641091 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.641208 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.641815 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.642476 4688 scope.go:117] "RemoveContainer" containerID="d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.645306 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-g9v2p"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.649183 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.651475 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.679828 4688 scope.go:117] "RemoveContainer" containerID="132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9"
Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.680907 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9\": container with ID starting with 132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9 not found: ID does not exist" containerID="132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.680980 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9"} err="failed to get container status \"132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9\": rpc error: code = NotFound desc = could not find container \"132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9\": container with ID starting with 132d1e83d7faad4f6f3d407323e62360f282ab905f7bbac60f700f65829d0bd9 not found: ID does not exist"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.681008 4688 scope.go:117] "RemoveContainer" containerID="39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a"
Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.681423 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a\": container with ID starting with 39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a not found: ID does not exist" containerID="39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.681440 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a"} err="failed to get container status \"39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a\": rpc error: code = NotFound desc = could not find container \"39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a\": container with ID starting with 39f3dd3141802a39e75a28691a3abb490fd9c0a2a0442a7fdc119ca401ddbc8a not found: ID does not exist"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.681452 4688 scope.go:117] "RemoveContainer" containerID="32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647"
Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.682070 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647\": container with ID starting with 32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647 not found: ID does not exist" containerID="32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.682112 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647"} err="failed to get container status \"32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647\": rpc error: code = NotFound desc = could not find container \"32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647\": container with ID starting with 32adc78bffb1eb85e084215cab9145de5c3e4fe17c6f3cff4c7eab696c780647 not found: ID does not exist"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.682145 4688 scope.go:117] "RemoveContainer" containerID="d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766"
Nov 25 13:13:59 crc kubenswrapper[4688]: E1125 13:13:59.682450 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766\": container with ID starting with d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766 not found: ID does not exist" containerID="d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.682481 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766"} err="failed to get container status \"d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766\": rpc error: code = NotFound desc = could not find container \"d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766\": container with ID starting with d7d06a428abeffcb52722167b9cfe874fa8fe3e2cd39658661b3e8144bdd0766 not found: ID does not exist"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.691984 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692040 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692073 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/684150f1-16d8-4c3b-87c0-b1db8df1a115-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692137 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692207 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/684150f1-16d8-4c3b-87c0-b1db8df1a115-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692238 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692280 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692310 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5wgl\" (UniqueName: \"kubernetes.io/projected/684150f1-16d8-4c3b-87c0-b1db8df1a115-kube-api-access-c5wgl\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692339 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/684150f1-16d8-4c3b-87c0-b1db8df1a115-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692374 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/684150f1-16d8-4c3b-87c0-b1db8df1a115-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.692411 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-config\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794246 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794320 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5wgl\" (UniqueName: \"kubernetes.io/projected/684150f1-16d8-4c3b-87c0-b1db8df1a115-kube-api-access-c5wgl\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794354 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/684150f1-16d8-4c3b-87c0-b1db8df1a115-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794413 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/684150f1-16d8-4c3b-87c0-b1db8df1a115-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794458 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-config\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794584 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794613 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794641 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/684150f1-16d8-4c3b-87c0-b1db8df1a115-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794724 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794805 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/684150f1-16d8-4c3b-87c0-b1db8df1a115-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.794837 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.795792 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/684150f1-16d8-4c3b-87c0-b1db8df1a115-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.795810 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/684150f1-16d8-4c3b-87c0-b1db8df1a115-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.798226 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.798470 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/684150f1-16d8-4c3b-87c0-b1db8df1a115-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.798979 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/684150f1-16d8-4c3b-87c0-b1db8df1a115-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.799068 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.799298 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.799737 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.799756 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-config\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.814499 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/684150f1-16d8-4c3b-87c0-b1db8df1a115-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:13:59 crc kubenswrapper[4688]: I1125 13:13:59.819010 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5wgl\" (UniqueName: \"kubernetes.io/projected/684150f1-16d8-4c3b-87c0-b1db8df1a115-kube-api-access-c5wgl\") pod \"prometheus-metric-storage-0\" (UID: \"684150f1-16d8-4c3b-87c0-b1db8df1a115\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 13:14:00 crc kubenswrapper[4688]: I1125 13:14:00.018022 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 25 13:14:00 crc kubenswrapper[4688]: I1125 13:14:00.752605 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a48f058-02c6-4e26-bbd1-ad023e9ce69a" path="/var/lib/kubelet/pods/0a48f058-02c6-4e26-bbd1-ad023e9ce69a/volumes"
Nov 25 13:14:01 crc kubenswrapper[4688]: I1125 13:14:01.531392 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 13:14:01 crc kubenswrapper[4688]: I1125 13:14:01.575252 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"684150f1-16d8-4c3b-87c0-b1db8df1a115","Type":"ContainerStarted","Data":"0d29d5c00e9cc34eb80da2cbdd6485cf248a52bf1960af0169b2d38341070831"}
Nov 25 13:14:05 crc kubenswrapper[4688]: I1125 13:14:05.619801 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"684150f1-16d8-4c3b-87c0-b1db8df1a115","Type":"ContainerStarted","Data":"3031c754df2ae960ea17ba5ddb56a77b3fc9e051d7b80d948364c082857f2f1a"}
Nov 25 13:14:12 crc kubenswrapper[4688]: I1125 13:14:12.690555 4688 generic.go:334] "Generic (PLEG): container finished" podID="684150f1-16d8-4c3b-87c0-b1db8df1a115" containerID="3031c754df2ae960ea17ba5ddb56a77b3fc9e051d7b80d948364c082857f2f1a" exitCode=0
Nov 25 13:14:12 crc kubenswrapper[4688]: I1125 13:14:12.691402 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"684150f1-16d8-4c3b-87c0-b1db8df1a115","Type":"ContainerDied","Data":"3031c754df2ae960ea17ba5ddb56a77b3fc9e051d7b80d948364c082857f2f1a"}
Nov 25 13:14:13 crc kubenswrapper[4688]: I1125 13:14:13.702328 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"684150f1-16d8-4c3b-87c0-b1db8df1a115","Type":"ContainerStarted","Data":"4c6f4577cf40820400ebcbb061d1472247ec9e200fba32b9164778a66f965e88"}
Nov 25 13:14:16 crc kubenswrapper[4688]: I1125 13:14:16.738446 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"684150f1-16d8-4c3b-87c0-b1db8df1a115","Type":"ContainerStarted","Data":"f99ab1bf68d3660dd9ef21290c3184b26654c92a5cd3f0e8b7a1512f3f6049c5"}
Nov 25 13:14:16 crc kubenswrapper[4688]: I1125 13:14:16.752877 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"684150f1-16d8-4c3b-87c0-b1db8df1a115","Type":"ContainerStarted","Data":"d82335d4f4104e388c08f1386aef65be8932723473116bc000be6eb15092189d"}
Nov 25 13:14:16 crc kubenswrapper[4688]: I1125 13:14:16.769618 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.769596293 podStartE2EDuration="17.769596293s" podCreationTimestamp="2025-11-25 13:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:14:16.760041754 +0000 UTC m=+3606.869670632" watchObservedRunningTime="2025-11-25 13:14:16.769596293 +0000 UTC m=+3606.879225161"
Nov 25 13:14:17 crc kubenswrapper[4688]: I1125 13:14:17.853981 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
13:14:17 crc kubenswrapper[4688]: I1125 13:14:17.854038 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:14:17 crc kubenswrapper[4688]: I1125 13:14:17.854078 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 13:14:17 crc kubenswrapper[4688]: I1125 13:14:17.854842 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ad0523d7f01b540add0ee302083034de4611d277d3f73ffc112bf68fb001400"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 13:14:17 crc kubenswrapper[4688]: I1125 13:14:17.854914 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://7ad0523d7f01b540add0ee302083034de4611d277d3f73ffc112bf68fb001400" gracePeriod=600 Nov 25 13:14:18 crc kubenswrapper[4688]: I1125 13:14:18.761151 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="7ad0523d7f01b540add0ee302083034de4611d277d3f73ffc112bf68fb001400" exitCode=0 Nov 25 13:14:18 crc kubenswrapper[4688]: I1125 13:14:18.761229 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"7ad0523d7f01b540add0ee302083034de4611d277d3f73ffc112bf68fb001400"} Nov 25 13:14:18 crc kubenswrapper[4688]: I1125 13:14:18.761800 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81"} Nov 25 13:14:18 crc kubenswrapper[4688]: I1125 13:14:18.761870 4688 scope.go:117] "RemoveContainer" containerID="eb08d59265332beb0bcdca48d0da604bdc128175503ac34beb10c63a84e9ec60" Nov 25 13:14:20 crc kubenswrapper[4688]: I1125 13:14:20.018683 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 25 13:14:30 crc kubenswrapper[4688]: I1125 13:14:30.018548 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 25 13:14:30 crc kubenswrapper[4688]: I1125 13:14:30.025286 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 25 13:14:30 crc kubenswrapper[4688]: I1125 13:14:30.898077 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.170908 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8"] Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.176140 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.178816 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.180564 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.185926 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8"] Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.304576 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgfs\" (UniqueName: \"kubernetes.io/projected/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-kube-api-access-xlgfs\") pod \"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.304758 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-secret-volume\") pod \"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.305025 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-config-volume\") pod \"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.408023 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlgfs\" (UniqueName: \"kubernetes.io/projected/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-kube-api-access-xlgfs\") pod \"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.408101 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-secret-volume\") pod \"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.408186 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-config-volume\") pod \"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.409397 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-config-volume\") pod 
\"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.415019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-secret-volume\") pod \"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.426029 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlgfs\" (UniqueName: \"kubernetes.io/projected/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-kube-api-access-xlgfs\") pod \"collect-profiles-29401275-skkg8\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.499368 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:00 crc kubenswrapper[4688]: I1125 13:15:00.970224 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8"] Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.179590 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" event={"ID":"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b","Type":"ContainerStarted","Data":"ab1214bf724e706c6a0c7b4358bd741e606a11a18409b75cd66e08c8a54a528f"} Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.179654 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" event={"ID":"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b","Type":"ContainerStarted","Data":"b5600ce0345175a60d6bc44251cfe9ed19552469405fb6e0c39a67adbd4d8a95"} Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.204965 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" podStartSLOduration=1.20494153 podStartE2EDuration="1.20494153s" podCreationTimestamp="2025-11-25 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:15:01.194654412 +0000 UTC m=+3651.304283320" watchObservedRunningTime="2025-11-25 13:15:01.20494153 +0000 UTC m=+3651.314570398" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.299352 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xfgrd"] Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.306415 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.315067 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfgrd"] Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.327012 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25ba3607-a4a2-4bc5-8835-980a9ff6526f-utilities\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.327119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmwb\" (UniqueName: \"kubernetes.io/projected/25ba3607-a4a2-4bc5-8835-980a9ff6526f-kube-api-access-qjmwb\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.327157 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25ba3607-a4a2-4bc5-8835-980a9ff6526f-catalog-content\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.429585 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmwb\" (UniqueName: \"kubernetes.io/projected/25ba3607-a4a2-4bc5-8835-980a9ff6526f-kube-api-access-qjmwb\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.429640 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25ba3607-a4a2-4bc5-8835-980a9ff6526f-catalog-content\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.429771 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25ba3607-a4a2-4bc5-8835-980a9ff6526f-utilities\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.430204 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25ba3607-a4a2-4bc5-8835-980a9ff6526f-utilities\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.430402 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25ba3607-a4a2-4bc5-8835-980a9ff6526f-catalog-content\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.452262 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qjmwb\" (UniqueName: \"kubernetes.io/projected/25ba3607-a4a2-4bc5-8835-980a9ff6526f-kube-api-access-qjmwb\") pod \"redhat-marketplace-xfgrd\" (UID: \"25ba3607-a4a2-4bc5-8835-980a9ff6526f\") " pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:01 crc kubenswrapper[4688]: I1125 13:15:01.627097 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:02 crc kubenswrapper[4688]: I1125 13:15:02.179897 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfgrd"] Nov 25 13:15:02 crc kubenswrapper[4688]: W1125 13:15:02.186972 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25ba3607_a4a2_4bc5_8835_980a9ff6526f.slice/crio-f07e89cd527f800e0b80a55bee69b8bff5fba09ded8969d96713a486aee76225 WatchSource:0}: Error finding container f07e89cd527f800e0b80a55bee69b8bff5fba09ded8969d96713a486aee76225: Status 404 returned error can't find the container with id f07e89cd527f800e0b80a55bee69b8bff5fba09ded8969d96713a486aee76225 Nov 25 13:15:02 crc kubenswrapper[4688]: I1125 13:15:02.209186 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" event={"ID":"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b","Type":"ContainerDied","Data":"ab1214bf724e706c6a0c7b4358bd741e606a11a18409b75cd66e08c8a54a528f"} Nov 25 13:15:02 crc kubenswrapper[4688]: I1125 13:15:02.209729 4688 generic.go:334] "Generic (PLEG): container finished" podID="ca9bd0c1-0af2-47a3-a986-6c5957a72d4b" containerID="ab1214bf724e706c6a0c7b4358bd741e606a11a18409b75cd66e08c8a54a528f" exitCode=0 Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.229545 4688 generic.go:334] "Generic (PLEG): container finished" podID="25ba3607-a4a2-4bc5-8835-980a9ff6526f" containerID="c5666e20c48ec41a8303705b74bbaf3afcb83bda716dd6f41f5a833e964f47e7" exitCode=0 Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.229627 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfgrd" event={"ID":"25ba3607-a4a2-4bc5-8835-980a9ff6526f","Type":"ContainerDied","Data":"c5666e20c48ec41a8303705b74bbaf3afcb83bda716dd6f41f5a833e964f47e7"} Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.230296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfgrd" event={"ID":"25ba3607-a4a2-4bc5-8835-980a9ff6526f","Type":"ContainerStarted","Data":"f07e89cd527f800e0b80a55bee69b8bff5fba09ded8969d96713a486aee76225"} Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.595322 4688 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.708434 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-config-volume\") pod \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.708495 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlgfs\" (UniqueName: \"kubernetes.io/projected/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-kube-api-access-xlgfs\") pod \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.708592 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-secret-volume\") pod \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\" (UID: \"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b\") " Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.709194 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-config-volume" (OuterVolumeSpecName: "config-volume") pod "ca9bd0c1-0af2-47a3-a986-6c5957a72d4b" (UID: "ca9bd0c1-0af2-47a3-a986-6c5957a72d4b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.714568 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ca9bd0c1-0af2-47a3-a986-6c5957a72d4b" (UID: "ca9bd0c1-0af2-47a3-a986-6c5957a72d4b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.714758 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-kube-api-access-xlgfs" (OuterVolumeSpecName: "kube-api-access-xlgfs") pod "ca9bd0c1-0af2-47a3-a986-6c5957a72d4b" (UID: "ca9bd0c1-0af2-47a3-a986-6c5957a72d4b"). InnerVolumeSpecName "kube-api-access-xlgfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.810657 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.810944 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:03 crc kubenswrapper[4688]: I1125 13:15:03.810954 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlgfs\" (UniqueName: \"kubernetes.io/projected/ca9bd0c1-0af2-47a3-a986-6c5957a72d4b-kube-api-access-xlgfs\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:04 crc kubenswrapper[4688]: I1125 13:15:04.250114 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" event={"ID":"ca9bd0c1-0af2-47a3-a986-6c5957a72d4b","Type":"ContainerDied","Data":"b5600ce0345175a60d6bc44251cfe9ed19552469405fb6e0c39a67adbd4d8a95"} Nov 25 13:15:04 crc kubenswrapper[4688]: I1125 13:15:04.250181 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5600ce0345175a60d6bc44251cfe9ed19552469405fb6e0c39a67adbd4d8a95" Nov 25 13:15:04 crc kubenswrapper[4688]: I1125 13:15:04.250307 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401275-skkg8" Nov 25 13:15:04 crc kubenswrapper[4688]: I1125 13:15:04.304375 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76"] Nov 25 13:15:04 crc kubenswrapper[4688]: I1125 13:15:04.318724 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401230-ljj76"] Nov 25 13:15:04 crc kubenswrapper[4688]: I1125 13:15:04.751898 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7f19c8-5ce9-477f-8930-c3ea89fb14c1" path="/var/lib/kubelet/pods/6a7f19c8-5ce9-477f-8930-c3ea89fb14c1/volumes" Nov 25 13:15:05 crc kubenswrapper[4688]: I1125 13:15:05.901730 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dx5p2"] Nov 25 13:15:05 crc kubenswrapper[4688]: E1125 13:15:05.902617 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9bd0c1-0af2-47a3-a986-6c5957a72d4b" containerName="collect-profiles" Nov 25 13:15:05 crc kubenswrapper[4688]: I1125 13:15:05.902633 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9bd0c1-0af2-47a3-a986-6c5957a72d4b" containerName="collect-profiles" Nov 25 13:15:05 crc kubenswrapper[4688]: I1125 13:15:05.902942 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9bd0c1-0af2-47a3-a986-6c5957a72d4b" containerName="collect-profiles" Nov 25 13:15:05 crc kubenswrapper[4688]: I1125 13:15:05.904835 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:05 crc kubenswrapper[4688]: I1125 13:15:05.916171 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dx5p2"] Nov 25 13:15:05 crc kubenswrapper[4688]: I1125 13:15:05.952508 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-catalog-content\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:05 crc kubenswrapper[4688]: I1125 13:15:05.952878 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qdmp\" (UniqueName: \"kubernetes.io/projected/4c692387-efea-4625-bd2a-3313011d5b41-kube-api-access-6qdmp\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:05 crc kubenswrapper[4688]: I1125 13:15:05.953162 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-utilities\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:06 crc kubenswrapper[4688]: I1125 13:15:06.055402 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-utilities\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:06 crc kubenswrapper[4688]: I1125 13:15:06.055662 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-catalog-content\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:06 crc kubenswrapper[4688]: I1125 13:15:06.055704 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qdmp\" (UniqueName: \"kubernetes.io/projected/4c692387-efea-4625-bd2a-3313011d5b41-kube-api-access-6qdmp\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:06 crc kubenswrapper[4688]: I1125 13:15:06.056663 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-utilities\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:06 crc kubenswrapper[4688]: I1125 13:15:06.056840 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-catalog-content\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:06 crc kubenswrapper[4688]: I1125 13:15:06.083445 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6qdmp\" (UniqueName: \"kubernetes.io/projected/4c692387-efea-4625-bd2a-3313011d5b41-kube-api-access-6qdmp\") pod \"certified-operators-dx5p2\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:06 crc kubenswrapper[4688]: I1125 13:15:06.232275 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:06 crc kubenswrapper[4688]: I1125 13:15:06.795611 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dx5p2"] Nov 25 13:15:06 crc kubenswrapper[4688]: W1125 13:15:06.801632 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c692387_efea_4625_bd2a_3313011d5b41.slice/crio-05223c9eee9c9f8da8af5c2d097665da51cbc1791c9a3a7a58f96826aceba2a0 WatchSource:0}: Error finding container 05223c9eee9c9f8da8af5c2d097665da51cbc1791c9a3a7a58f96826aceba2a0: Status 404 returned error can't find the container with id 05223c9eee9c9f8da8af5c2d097665da51cbc1791c9a3a7a58f96826aceba2a0 Nov 25 13:15:07 crc kubenswrapper[4688]: I1125 13:15:07.283499 4688 generic.go:334] "Generic (PLEG): container finished" podID="4c692387-efea-4625-bd2a-3313011d5b41" containerID="c409384a9e272e371a744706b0c11e81e5d990bec8cc24ce71f2b431459a23ea" exitCode=0 Nov 25 13:15:07 crc kubenswrapper[4688]: I1125 13:15:07.283829 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx5p2" event={"ID":"4c692387-efea-4625-bd2a-3313011d5b41","Type":"ContainerDied","Data":"c409384a9e272e371a744706b0c11e81e5d990bec8cc24ce71f2b431459a23ea"} Nov 25 13:15:07 crc kubenswrapper[4688]: I1125 13:15:07.283860 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx5p2" event={"ID":"4c692387-efea-4625-bd2a-3313011d5b41","Type":"ContainerStarted","Data":"05223c9eee9c9f8da8af5c2d097665da51cbc1791c9a3a7a58f96826aceba2a0"} Nov 25 13:15:09 crc kubenswrapper[4688]: I1125 13:15:09.304767 4688 generic.go:334] "Generic (PLEG): container finished" podID="25ba3607-a4a2-4bc5-8835-980a9ff6526f" containerID="5cbac3324794351ef9ffc030c1c942129f7706fc7dd9f0f1b730ef8ca498725b" exitCode=0 Nov 25 13:15:09 crc kubenswrapper[4688]: I1125 13:15:09.304884 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfgrd" event={"ID":"25ba3607-a4a2-4bc5-8835-980a9ff6526f","Type":"ContainerDied","Data":"5cbac3324794351ef9ffc030c1c942129f7706fc7dd9f0f1b730ef8ca498725b"} Nov 25 13:15:10 crc kubenswrapper[4688]: I1125 13:15:10.322763 4688 generic.go:334] "Generic (PLEG): container finished" podID="4c692387-efea-4625-bd2a-3313011d5b41" containerID="1bde1674e8f814a802de83b67bcb69d0362dd4c9f3bca7f4ddfea9031b76b8c7" exitCode=0 Nov 25 13:15:10 crc kubenswrapper[4688]: I1125 13:15:10.322833 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx5p2" event={"ID":"4c692387-efea-4625-bd2a-3313011d5b41","Type":"ContainerDied","Data":"1bde1674e8f814a802de83b67bcb69d0362dd4c9f3bca7f4ddfea9031b76b8c7"} Nov 25 13:15:11 crc kubenswrapper[4688]: I1125 13:15:11.335798 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfgrd" 
event={"ID":"25ba3607-a4a2-4bc5-8835-980a9ff6526f","Type":"ContainerStarted","Data":"7549c23f8edbf359120e130d539892a87bc359d89795edff9a1fb74638df909e"} Nov 25 13:15:12 crc kubenswrapper[4688]: I1125 13:15:12.367202 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xfgrd" podStartSLOduration=4.884108842 podStartE2EDuration="11.367181281s" podCreationTimestamp="2025-11-25 13:15:01 +0000 UTC" firstStartedPulling="2025-11-25 13:15:03.231759189 +0000 UTC m=+3653.341388057" lastFinishedPulling="2025-11-25 13:15:09.714831618 +0000 UTC m=+3659.824460496" observedRunningTime="2025-11-25 13:15:12.36491381 +0000 UTC m=+3662.474542678" watchObservedRunningTime="2025-11-25 13:15:12.367181281 +0000 UTC m=+3662.476810149" Nov 25 13:15:13 crc kubenswrapper[4688]: I1125 13:15:13.358034 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx5p2" event={"ID":"4c692387-efea-4625-bd2a-3313011d5b41","Type":"ContainerStarted","Data":"c426e36f229b81a025454aeb223dadd7b00a6ad864c7bd38cabd3c4158cb14d7"} Nov 25 13:15:13 crc kubenswrapper[4688]: I1125 13:15:13.383110 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dx5p2" podStartSLOduration=4.412538325 podStartE2EDuration="8.383090908s" podCreationTimestamp="2025-11-25 13:15:05 +0000 UTC" firstStartedPulling="2025-11-25 13:15:08.199654928 +0000 UTC m=+3658.309283796" lastFinishedPulling="2025-11-25 13:15:12.170207511 +0000 UTC m=+3662.279836379" observedRunningTime="2025-11-25 13:15:13.372994085 +0000 UTC m=+3663.482622963" watchObservedRunningTime="2025-11-25 13:15:13.383090908 +0000 UTC m=+3663.492719776" Nov 25 13:15:16 crc kubenswrapper[4688]: I1125 13:15:16.233299 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:16 crc kubenswrapper[4688]: I1125 13:15:16.233944 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:16 crc kubenswrapper[4688]: I1125 13:15:16.288809 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:21 crc kubenswrapper[4688]: I1125 13:15:21.627696 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:21 crc kubenswrapper[4688]: I1125 13:15:21.628364 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:21 crc kubenswrapper[4688]: I1125 13:15:21.676230 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:22 crc kubenswrapper[4688]: I1125 13:15:22.489167 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xfgrd" Nov 25 13:15:24 crc kubenswrapper[4688]: I1125 13:15:24.734991 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfgrd"] Nov 25 13:15:25 crc kubenswrapper[4688]: I1125 13:15:25.292679 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfmlx"] Nov 25 13:15:25 crc kubenswrapper[4688]: I1125 13:15:25.292998 4688 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-mfmlx" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="registry-server" containerID="cri-o://51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18" gracePeriod=2 Nov 25 13:15:25 crc kubenswrapper[4688]: I1125 13:15:25.477912 4688 generic.go:334] "Generic (PLEG): container finished" podID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerID="51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18" exitCode=0 Nov 25 13:15:25 crc kubenswrapper[4688]: I1125 13:15:25.477977 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfmlx" event={"ID":"c2722d9c-d618-4c2f-a44e-25bde80431b9","Type":"ContainerDied","Data":"51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18"} Nov 25 13:15:25 crc kubenswrapper[4688]: E1125 13:15:25.634412 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18 is running failed: container process not found" containerID="51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 13:15:25 crc kubenswrapper[4688]: E1125 13:15:25.635055 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18 is running failed: container process not found" containerID="51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 13:15:25 crc kubenswrapper[4688]: E1125 13:15:25.635384 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18 is running failed: container process not found" containerID="51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 13:15:25 crc kubenswrapper[4688]: E1125 13:15:25.635420 4688 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-mfmlx" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="registry-server" Nov 25 13:15:25 crc kubenswrapper[4688]: I1125 13:15:25.955454 4688 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.074636 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td7m4\" (UniqueName: \"kubernetes.io/projected/c2722d9c-d618-4c2f-a44e-25bde80431b9-kube-api-access-td7m4\") pod \"c2722d9c-d618-4c2f-a44e-25bde80431b9\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.074796 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-catalog-content\") pod \"c2722d9c-d618-4c2f-a44e-25bde80431b9\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.074852 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-utilities\") pod \"c2722d9c-d618-4c2f-a44e-25bde80431b9\" (UID: \"c2722d9c-d618-4c2f-a44e-25bde80431b9\") " Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.075992 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-utilities" (OuterVolumeSpecName: "utilities") pod "c2722d9c-d618-4c2f-a44e-25bde80431b9" (UID: "c2722d9c-d618-4c2f-a44e-25bde80431b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.093381 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2722d9c-d618-4c2f-a44e-25bde80431b9-kube-api-access-td7m4" (OuterVolumeSpecName: "kube-api-access-td7m4") pod "c2722d9c-d618-4c2f-a44e-25bde80431b9" (UID: "c2722d9c-d618-4c2f-a44e-25bde80431b9"). InnerVolumeSpecName "kube-api-access-td7m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.097004 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2722d9c-d618-4c2f-a44e-25bde80431b9" (UID: "c2722d9c-d618-4c2f-a44e-25bde80431b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.177709 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td7m4\" (UniqueName: \"kubernetes.io/projected/c2722d9c-d618-4c2f-a44e-25bde80431b9-kube-api-access-td7m4\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.177748 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.177760 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2722d9c-d618-4c2f-a44e-25bde80431b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.299119 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.490145 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfmlx" event={"ID":"c2722d9c-d618-4c2f-a44e-25bde80431b9","Type":"ContainerDied","Data":"2faccb33e131458b0beac0917b541de242b3084f9d7a4618a159a7cb85d91d0d"} Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.490191 4688 scope.go:117] "RemoveContainer" containerID="51e95a49110559820eba41255caaa9ba82dfad1d5073cf8e17fcbe3f5b930a18" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.490223 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfmlx" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.527628 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfmlx"] Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.537944 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfmlx"] Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.547424 4688 scope.go:117] "RemoveContainer" containerID="5de1503317b4f2c9b8cf2ec01b6f08b2c4f1bdf48328aa882520fd9cdf49f5c2" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.574901 4688 scope.go:117] "RemoveContainer" containerID="04c5c357a3a28ee79be4a6968eddecf0fd4f01f190c2185d3952a93f97cae1e6" Nov 25 13:15:26 crc kubenswrapper[4688]: I1125 13:15:26.769738 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" path="/var/lib/kubelet/pods/c2722d9c-d618-4c2f-a44e-25bde80431b9/volumes" Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.288798 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dx5p2"] Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.289692 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dx5p2" podUID="4c692387-efea-4625-bd2a-3313011d5b41" containerName="registry-server" containerID="cri-o://c426e36f229b81a025454aeb223dadd7b00a6ad864c7bd38cabd3c4158cb14d7" gracePeriod=2 Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.524897 4688 generic.go:334] "Generic (PLEG): container finished" podID="4c692387-efea-4625-bd2a-3313011d5b41" containerID="c426e36f229b81a025454aeb223dadd7b00a6ad864c7bd38cabd3c4158cb14d7" exitCode=0 Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 
13:15:29.524946 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx5p2" event={"ID":"4c692387-efea-4625-bd2a-3313011d5b41","Type":"ContainerDied","Data":"c426e36f229b81a025454aeb223dadd7b00a6ad864c7bd38cabd3c4158cb14d7"} Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.792929 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.953039 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qdmp\" (UniqueName: \"kubernetes.io/projected/4c692387-efea-4625-bd2a-3313011d5b41-kube-api-access-6qdmp\") pod \"4c692387-efea-4625-bd2a-3313011d5b41\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.953248 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-catalog-content\") pod \"4c692387-efea-4625-bd2a-3313011d5b41\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.953283 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-utilities\") pod \"4c692387-efea-4625-bd2a-3313011d5b41\" (UID: \"4c692387-efea-4625-bd2a-3313011d5b41\") " Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.954042 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-utilities" (OuterVolumeSpecName: "utilities") pod "4c692387-efea-4625-bd2a-3313011d5b41" (UID: "4c692387-efea-4625-bd2a-3313011d5b41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:15:29 crc kubenswrapper[4688]: I1125 13:15:29.959184 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c692387-efea-4625-bd2a-3313011d5b41-kube-api-access-6qdmp" (OuterVolumeSpecName: "kube-api-access-6qdmp") pod "4c692387-efea-4625-bd2a-3313011d5b41" (UID: "4c692387-efea-4625-bd2a-3313011d5b41"). InnerVolumeSpecName "kube-api-access-6qdmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.008281 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c692387-efea-4625-bd2a-3313011d5b41" (UID: "4c692387-efea-4625-bd2a-3313011d5b41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.055403 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qdmp\" (UniqueName: \"kubernetes.io/projected/4c692387-efea-4625-bd2a-3313011d5b41-kube-api-access-6qdmp\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.055441 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.055453 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c692387-efea-4625-bd2a-3313011d5b41-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.535770 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx5p2" event={"ID":"4c692387-efea-4625-bd2a-3313011d5b41","Type":"ContainerDied","Data":"05223c9eee9c9f8da8af5c2d097665da51cbc1791c9a3a7a58f96826aceba2a0"} Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.535828 4688 scope.go:117] "RemoveContainer" containerID="c426e36f229b81a025454aeb223dadd7b00a6ad864c7bd38cabd3c4158cb14d7" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.535881 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dx5p2" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.566373 4688 scope.go:117] "RemoveContainer" containerID="1bde1674e8f814a802de83b67bcb69d0362dd4c9f3bca7f4ddfea9031b76b8c7" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.576895 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dx5p2"] Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.586038 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dx5p2"] Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.599046 4688 scope.go:117] "RemoveContainer" containerID="c409384a9e272e371a744706b0c11e81e5d990bec8cc24ce71f2b431459a23ea" Nov 25 13:15:30 crc kubenswrapper[4688]: I1125 13:15:30.751166 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c692387-efea-4625-bd2a-3313011d5b41" path="/var/lib/kubelet/pods/4c692387-efea-4625-bd2a-3313011d5b41/volumes" Nov 25 13:15:35 crc kubenswrapper[4688]: I1125 13:15:35.145901 4688 scope.go:117] "RemoveContainer" containerID="1a835df637867d072881ca33bf73711a3c5454686470355cf1c075edba97c84d" Nov 25 13:15:58 crc kubenswrapper[4688]: I1125 13:15:58.070981 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/2.log" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.922725 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gzgxj/must-gather-n64sb"] Nov 25 13:16:17 crc kubenswrapper[4688]: E1125 13:16:17.923650 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c692387-efea-4625-bd2a-3313011d5b41" containerName="extract-utilities" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.923662 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c692387-efea-4625-bd2a-3313011d5b41" containerName="extract-utilities" Nov 25 13:16:17 crc 
kubenswrapper[4688]: E1125 13:16:17.923682 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="registry-server" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.923688 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="registry-server" Nov 25 13:16:17 crc kubenswrapper[4688]: E1125 13:16:17.923697 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="extract-utilities" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.923705 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="extract-utilities" Nov 25 13:16:17 crc kubenswrapper[4688]: E1125 13:16:17.923716 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="extract-content" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.923722 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="extract-content" Nov 25 13:16:17 crc kubenswrapper[4688]: E1125 13:16:17.923741 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c692387-efea-4625-bd2a-3313011d5b41" containerName="registry-server" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.923747 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c692387-efea-4625-bd2a-3313011d5b41" containerName="registry-server" Nov 25 13:16:17 crc kubenswrapper[4688]: E1125 13:16:17.923761 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c692387-efea-4625-bd2a-3313011d5b41" containerName="extract-content" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.923767 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c692387-efea-4625-bd2a-3313011d5b41" containerName="extract-content" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.923930 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2722d9c-d618-4c2f-a44e-25bde80431b9" containerName="registry-server" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.923948 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c692387-efea-4625-bd2a-3313011d5b41" containerName="registry-server" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.925046 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.927300 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gzgxj"/"default-dockercfg-w7pkm" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.927314 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gzgxj"/"openshift-service-ca.crt" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.927394 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gzgxj"/"kube-root-ca.crt" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.935214 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdw4\" (UniqueName: \"kubernetes.io/projected/f61c0ee9-6f22-4777-b164-2c48769b3b94-kube-api-access-tfdw4\") pod \"must-gather-n64sb\" (UID: \"f61c0ee9-6f22-4777-b164-2c48769b3b94\") " pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:16:17 crc kubenswrapper[4688]: I1125 13:16:17.935278 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f61c0ee9-6f22-4777-b164-2c48769b3b94-must-gather-output\") pod \"must-gather-n64sb\" (UID: \"f61c0ee9-6f22-4777-b164-2c48769b3b94\") " pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:16:18 crc kubenswrapper[4688]: I1125 13:16:18.018907 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gzgxj/must-gather-n64sb"] Nov 25 13:16:18 crc kubenswrapper[4688]: I1125 13:16:18.037035 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdw4\" (UniqueName: \"kubernetes.io/projected/f61c0ee9-6f22-4777-b164-2c48769b3b94-kube-api-access-tfdw4\") pod \"must-gather-n64sb\" (UID: \"f61c0ee9-6f22-4777-b164-2c48769b3b94\") " pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:16:18 crc kubenswrapper[4688]: I1125 13:16:18.037117 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f61c0ee9-6f22-4777-b164-2c48769b3b94-must-gather-output\") pod \"must-gather-n64sb\" (UID: \"f61c0ee9-6f22-4777-b164-2c48769b3b94\") " pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:16:18 crc kubenswrapper[4688]: I1125 13:16:18.037825 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f61c0ee9-6f22-4777-b164-2c48769b3b94-must-gather-output\") pod \"must-gather-n64sb\" (UID: \"f61c0ee9-6f22-4777-b164-2c48769b3b94\") " pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:16:18 crc kubenswrapper[4688]: I1125 13:16:18.058039 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdw4\" (UniqueName: \"kubernetes.io/projected/f61c0ee9-6f22-4777-b164-2c48769b3b94-kube-api-access-tfdw4\") pod \"must-gather-n64sb\" (UID: \"f61c0ee9-6f22-4777-b164-2c48769b3b94\") " pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:16:18 crc kubenswrapper[4688]: I1125 13:16:18.254824 4688 util.go:30] "No sandbox for pod can be found. 
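Need to start a new one" pod="openshift-must-gather-gzgxj/must-gather-n64sb"

The reflector.go "Caches populated" lines above mark client-go informers finishing their initial list of the new namespace's Secret and ConfigMap objects, which the kubelet needs before it can assemble the projected kube-api-access volume. A minimal standalone sketch of that machinery, watching ConfigMaps in the same namespace; the kubeconfig path is a placeholder and the resync period is arbitrary:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(cs, 30*time.Second,
		informers.WithNamespace("openshift-must-gather-gzgxj"))
	inf := factory.Core().V1().ConfigMaps().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("configmap added:", obj.(*corev1.ConfigMap).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// This sync point corresponds to the "Caches populated" log lines above.
	cache.WaitForCacheSync(stop, inf.HasSynced)
	select {} // keep watching
}
```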
Nov 25 13:16:18 crc kubenswrapper[4688]: I1125 13:16:18.736918 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gzgxj/must-gather-n64sb"] Nov 25 13:16:19 crc kubenswrapper[4688]: I1125 13:16:19.087540 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/must-gather-n64sb" event={"ID":"f61c0ee9-6f22-4777-b164-2c48769b3b94","Type":"ContainerStarted","Data":"4e1b42994372cfc6a8fbbe2d4bc0d57a795334e6a291fe44f2ef88ed6b29f030"} Nov 25 13:16:30 crc kubenswrapper[4688]: I1125 13:16:30.210389 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/must-gather-n64sb" event={"ID":"f61c0ee9-6f22-4777-b164-2c48769b3b94","Type":"ContainerStarted","Data":"0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb"} Nov 25 13:16:30 crc kubenswrapper[4688]: I1125 13:16:30.211129 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/must-gather-n64sb" event={"ID":"f61c0ee9-6f22-4777-b164-2c48769b3b94","Type":"ContainerStarted","Data":"11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b"} Nov 25 13:16:30 crc kubenswrapper[4688]: I1125 13:16:30.228934 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gzgxj/must-gather-n64sb" podStartSLOduration=3.373809396 podStartE2EDuration="13.228880983s" podCreationTimestamp="2025-11-25 13:16:17 +0000 UTC" firstStartedPulling="2025-11-25 13:16:18.741912923 +0000 UTC m=+3728.851541791" lastFinishedPulling="2025-11-25 13:16:28.59698451 +0000 UTC m=+3738.706613378" observedRunningTime="2025-11-25 13:16:30.228502913 +0000 UTC m=+3740.338131781" watchObservedRunningTime="2025-11-25 13:16:30.228880983 +0000 UTC m=+3740.338509891" Nov 25 13:16:39 crc kubenswrapper[4688]: I1125 13:16:39.667664 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gzgxj/crc-debug-wlr4t"] Nov 25 13:16:39 crc kubenswrapper[4688]: I1125 13:16:39.669489 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:16:39 crc kubenswrapper[4688]: I1125 13:16:39.789741 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44zfv\" (UniqueName: \"kubernetes.io/projected/50e27b1e-28bf-44d8-9ddc-149d8a397205-kube-api-access-44zfv\") pod \"crc-debug-wlr4t\" (UID: \"50e27b1e-28bf-44d8-9ddc-149d8a397205\") " pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:16:39 crc kubenswrapper[4688]: I1125 13:16:39.789879 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50e27b1e-28bf-44d8-9ddc-149d8a397205-host\") pod \"crc-debug-wlr4t\" (UID: \"50e27b1e-28bf-44d8-9ddc-149d8a397205\") " pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:16:39 crc kubenswrapper[4688]: I1125 13:16:39.891433 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44zfv\" (UniqueName: \"kubernetes.io/projected/50e27b1e-28bf-44d8-9ddc-149d8a397205-kube-api-access-44zfv\") pod \"crc-debug-wlr4t\" (UID: \"50e27b1e-28bf-44d8-9ddc-149d8a397205\") " pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:16:39 crc kubenswrapper[4688]: I1125 13:16:39.891522 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50e27b1e-28bf-44d8-9ddc-149d8a397205-host\") pod \"crc-debug-wlr4t\" (UID: \"50e27b1e-28bf-44d8-9ddc-149d8a397205\") " pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:16:39 crc kubenswrapper[4688]: I1125 13:16:39.891630 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50e27b1e-28bf-44d8-9ddc-149d8a397205-host\") pod \"crc-debug-wlr4t\" (UID: \"50e27b1e-28bf-44d8-9ddc-149d8a397205\") " pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:16:39 crc kubenswrapper[4688]: I1125 13:16:39.913759 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44zfv\" (UniqueName: \"kubernetes.io/projected/50e27b1e-28bf-44d8-9ddc-149d8a397205-kube-api-access-44zfv\") pod \"crc-debug-wlr4t\" (UID: \"50e27b1e-28bf-44d8-9ddc-149d8a397205\") " pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:16:40 crc kubenswrapper[4688]: I1125 13:16:40.007421 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:16:40 crc kubenswrapper[4688]: W1125 13:16:40.058867 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50e27b1e_28bf_44d8_9ddc_149d8a397205.slice/crio-0a33fcfdf568c57715653872311f45a6dfdc1f61e95569aff9a65bf9ee3b67c9 WatchSource:0}: Error finding container 0a33fcfdf568c57715653872311f45a6dfdc1f61e95569aff9a65bf9ee3b67c9: Status 404 returned error can't find the container with id 0a33fcfdf568c57715653872311f45a6dfdc1f61e95569aff9a65bf9ee3b67c9 Nov 25 13:16:40 crc kubenswrapper[4688]: I1125 13:16:40.062725 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 13:16:40 crc kubenswrapper[4688]: I1125 13:16:40.314894 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" event={"ID":"50e27b1e-28bf-44d8-9ddc-149d8a397205","Type":"ContainerStarted","Data":"0a33fcfdf568c57715653872311f45a6dfdc1f61e95569aff9a65bf9ee3b67c9"} Nov 25 13:16:42 crc kubenswrapper[4688]: E1125 13:16:42.783626 4688 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.159:44558->38.102.83.159:39911: read tcp 38.102.83.159:44558->38.102.83.159:39911: read: connection reset by peer Nov 25 13:16:47 crc kubenswrapper[4688]: I1125 13:16:47.854383 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:16:47 crc kubenswrapper[4688]: I1125 13:16:47.855036 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:16:55 crc kubenswrapper[4688]: I1125 13:16:55.478459 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" event={"ID":"50e27b1e-28bf-44d8-9ddc-149d8a397205","Type":"ContainerStarted","Data":"907016dd67b8efb2d65de0f50eedfc3043a24b5c798ae05232f7c2a13d784475"} Nov 25 13:16:55 crc kubenswrapper[4688]: I1125 13:16:55.506163 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" podStartSLOduration=2.13103765 podStartE2EDuration="16.506141061s" podCreationTimestamp="2025-11-25 13:16:39 +0000 UTC" firstStartedPulling="2025-11-25 13:16:40.06243667 +0000 UTC m=+3750.172065538" lastFinishedPulling="2025-11-25 13:16:54.437540081 +0000 UTC m=+3764.547168949" observedRunningTime="2025-11-25 13:16:55.500365535 +0000 UTC m=+3765.609994403" watchObservedRunningTime="2025-11-25 13:16:55.506141061 +0000 UTC m=+3765.615769929" Nov 25 13:17:16 crc kubenswrapper[4688]: I1125 13:17:16.707180 4688 generic.go:334] "Generic (PLEG): container finished" podID="50e27b1e-28bf-44d8-9ddc-149d8a397205" containerID="907016dd67b8efb2d65de0f50eedfc3043a24b5c798ae05232f7c2a13d784475" exitCode=0 Nov 25 13:17:16 crc kubenswrapper[4688]: I1125 13:17:16.707372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" 
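event={"ID":"50e27b1e-28bf-44d8-9ddc-149d8a397205","Type":"ContainerDied","Data":"907016dd67b8efb2d65de0f50eedfc3043a24b5c798ae05232f7c2a13d784475"}

The recurring probe failure above is kubelet's HTTP liveness check against machine-config-daemon on 127.0.0.1:8798; "connection refused" means nothing is listening there, and at 13:17:47 below the failure threshold is reached and the container is killed for restart. The check itself is essentially an HTTP GET with a short timeout, where a transport error or a status outside the 200-399 range counts as failure. A sketch with the URL taken from the log and a guessed timeout:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// probe performs one HTTP-GET health check in the spirit of kubelet's
// HTTP prober: transport errors and statuses outside 200-399 are failures.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health", 1*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	fmt.Println("healthy")
}
```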
event={"ID":"50e27b1e-28bf-44d8-9ddc-149d8a397205","Type":"ContainerDied","Data":"907016dd67b8efb2d65de0f50eedfc3043a24b5c798ae05232f7c2a13d784475"} Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.833184 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.853801 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.853869 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.872250 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gzgxj/crc-debug-wlr4t"] Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.889133 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gzgxj/crc-debug-wlr4t"] Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.954442 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44zfv\" (UniqueName: \"kubernetes.io/projected/50e27b1e-28bf-44d8-9ddc-149d8a397205-kube-api-access-44zfv\") pod \"50e27b1e-28bf-44d8-9ddc-149d8a397205\" (UID: \"50e27b1e-28bf-44d8-9ddc-149d8a397205\") " Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.954969 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50e27b1e-28bf-44d8-9ddc-149d8a397205-host\") pod \"50e27b1e-28bf-44d8-9ddc-149d8a397205\" (UID: \"50e27b1e-28bf-44d8-9ddc-149d8a397205\") " Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.955558 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50e27b1e-28bf-44d8-9ddc-149d8a397205-host" (OuterVolumeSpecName: "host") pod "50e27b1e-28bf-44d8-9ddc-149d8a397205" (UID: "50e27b1e-28bf-44d8-9ddc-149d8a397205"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 13:17:17 crc kubenswrapper[4688]: I1125 13:17:17.977217 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e27b1e-28bf-44d8-9ddc-149d8a397205-kube-api-access-44zfv" (OuterVolumeSpecName: "kube-api-access-44zfv") pod "50e27b1e-28bf-44d8-9ddc-149d8a397205" (UID: "50e27b1e-28bf-44d8-9ddc-149d8a397205"). InnerVolumeSpecName "kube-api-access-44zfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:17:18 crc kubenswrapper[4688]: I1125 13:17:18.056930 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/50e27b1e-28bf-44d8-9ddc-149d8a397205-host\") on node \"crc\" DevicePath \"\"" Nov 25 13:17:18 crc kubenswrapper[4688]: I1125 13:17:18.057286 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44zfv\" (UniqueName: \"kubernetes.io/projected/50e27b1e-28bf-44d8-9ddc-149d8a397205-kube-api-access-44zfv\") on node \"crc\" DevicePath \"\"" Nov 25 13:17:18 crc kubenswrapper[4688]: I1125 13:17:18.728607 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a33fcfdf568c57715653872311f45a6dfdc1f61e95569aff9a65bf9ee3b67c9" Nov 25 13:17:18 crc kubenswrapper[4688]: I1125 13:17:18.728753 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gzgxj/crc-debug-wlr4t" Nov 25 13:17:18 crc kubenswrapper[4688]: I1125 13:17:18.754718 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e27b1e-28bf-44d8-9ddc-149d8a397205" path="/var/lib/kubelet/pods/50e27b1e-28bf-44d8-9ddc-149d8a397205/volumes" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.090106 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gzgxj/crc-debug-pg9g9"] Nov 25 13:17:19 crc kubenswrapper[4688]: E1125 13:17:19.091439 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e27b1e-28bf-44d8-9ddc-149d8a397205" containerName="container-00" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.091561 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e27b1e-28bf-44d8-9ddc-149d8a397205" containerName="container-00" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.091802 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e27b1e-28bf-44d8-9ddc-149d8a397205" containerName="container-00" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.092556 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.215495 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09a1470a-ae9f-4f7a-a42d-fcb005f88753-host\") pod \"crc-debug-pg9g9\" (UID: \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\") " pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.216203 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94zjn\" (UniqueName: \"kubernetes.io/projected/09a1470a-ae9f-4f7a-a42d-fcb005f88753-kube-api-access-94zjn\") pod \"crc-debug-pg9g9\" (UID: \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\") " pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.317710 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94zjn\" (UniqueName: \"kubernetes.io/projected/09a1470a-ae9f-4f7a-a42d-fcb005f88753-kube-api-access-94zjn\") pod \"crc-debug-pg9g9\" (UID: \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\") " pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.318063 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09a1470a-ae9f-4f7a-a42d-fcb005f88753-host\") pod \"crc-debug-pg9g9\" (UID: \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\") " pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.318159 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09a1470a-ae9f-4f7a-a42d-fcb005f88753-host\") pod \"crc-debug-pg9g9\" (UID: \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\") " pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.335198 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94zjn\" (UniqueName: \"kubernetes.io/projected/09a1470a-ae9f-4f7a-a42d-fcb005f88753-kube-api-access-94zjn\") pod \"crc-debug-pg9g9\" (UID: \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\") " pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.410976 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:19 crc kubenswrapper[4688]: W1125 13:17:19.458977 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09a1470a_ae9f_4f7a_a42d_fcb005f88753.slice/crio-62d2e5b54e52214febe1b8db595834bf6ba1b90640e58cf39582d568f5ec0da1 WatchSource:0}: Error finding container 62d2e5b54e52214febe1b8db595834bf6ba1b90640e58cf39582d568f5ec0da1: Status 404 returned error can't find the container with id 62d2e5b54e52214febe1b8db595834bf6ba1b90640e58cf39582d568f5ec0da1 Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.739743 4688 generic.go:334] "Generic (PLEG): container finished" podID="09a1470a-ae9f-4f7a-a42d-fcb005f88753" containerID="a550a06fb1e65816d5fec4bfb5d25f7fb7d834ca3b63805a0ddfc5627a23aa73" exitCode=1 Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.739841 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" event={"ID":"09a1470a-ae9f-4f7a-a42d-fcb005f88753","Type":"ContainerDied","Data":"a550a06fb1e65816d5fec4bfb5d25f7fb7d834ca3b63805a0ddfc5627a23aa73"} Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.740078 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" event={"ID":"09a1470a-ae9f-4f7a-a42d-fcb005f88753","Type":"ContainerStarted","Data":"62d2e5b54e52214febe1b8db595834bf6ba1b90640e58cf39582d568f5ec0da1"} Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.782407 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gzgxj/crc-debug-pg9g9"] Nov 25 13:17:19 crc kubenswrapper[4688]: I1125 13:17:19.794854 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gzgxj/crc-debug-pg9g9"] Nov 25 13:17:20 crc kubenswrapper[4688]: I1125 13:17:20.859665 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:21 crc kubenswrapper[4688]: I1125 13:17:21.072207 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09a1470a-ae9f-4f7a-a42d-fcb005f88753-host\") pod \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\" (UID: \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\") " Nov 25 13:17:21 crc kubenswrapper[4688]: I1125 13:17:21.072267 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94zjn\" (UniqueName: \"kubernetes.io/projected/09a1470a-ae9f-4f7a-a42d-fcb005f88753-kube-api-access-94zjn\") pod \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\" (UID: \"09a1470a-ae9f-4f7a-a42d-fcb005f88753\") " Nov 25 13:17:21 crc kubenswrapper[4688]: I1125 13:17:21.072682 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09a1470a-ae9f-4f7a-a42d-fcb005f88753-host" (OuterVolumeSpecName: "host") pod "09a1470a-ae9f-4f7a-a42d-fcb005f88753" (UID: "09a1470a-ae9f-4f7a-a42d-fcb005f88753"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 13:17:21 crc kubenswrapper[4688]: I1125 13:17:21.072827 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09a1470a-ae9f-4f7a-a42d-fcb005f88753-host\") on node \"crc\" DevicePath \"\"" Nov 25 13:17:21 crc kubenswrapper[4688]: I1125 13:17:21.079001 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a1470a-ae9f-4f7a-a42d-fcb005f88753-kube-api-access-94zjn" (OuterVolumeSpecName: "kube-api-access-94zjn") pod "09a1470a-ae9f-4f7a-a42d-fcb005f88753" (UID: "09a1470a-ae9f-4f7a-a42d-fcb005f88753"). InnerVolumeSpecName "kube-api-access-94zjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:17:21 crc kubenswrapper[4688]: I1125 13:17:21.174320 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94zjn\" (UniqueName: \"kubernetes.io/projected/09a1470a-ae9f-4f7a-a42d-fcb005f88753-kube-api-access-94zjn\") on node \"crc\" DevicePath \"\"" Nov 25 13:17:21 crc kubenswrapper[4688]: I1125 13:17:21.758314 4688 scope.go:117] "RemoveContainer" containerID="a550a06fb1e65816d5fec4bfb5d25f7fb7d834ca3b63805a0ddfc5627a23aa73" Nov 25 13:17:21 crc kubenswrapper[4688]: I1125 13:17:21.758363 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gzgxj/crc-debug-pg9g9" Nov 25 13:17:22 crc kubenswrapper[4688]: I1125 13:17:22.752979 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a1470a-ae9f-4f7a-a42d-fcb005f88753" path="/var/lib/kubelet/pods/09a1470a-ae9f-4f7a-a42d-fcb005f88753/volumes" Nov 25 13:17:47 crc kubenswrapper[4688]: I1125 13:17:47.854103 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:17:47 crc kubenswrapper[4688]: I1125 13:17:47.854563 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:17:47 crc kubenswrapper[4688]: I1125 13:17:47.854613 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 13:17:47 crc kubenswrapper[4688]: I1125 13:17:47.855388 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 13:17:47 crc kubenswrapper[4688]: I1125 13:17:47.855446 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" gracePeriod=600 Nov 25 13:17:47 crc kubenswrapper[4688]: E1125 13:17:47.979588 4688 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:17:48 crc kubenswrapper[4688]: I1125 13:17:48.020516 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" exitCode=0 Nov 25 13:17:48 crc kubenswrapper[4688]: I1125 13:17:48.020588 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81"} Nov 25 13:17:48 crc kubenswrapper[4688]: I1125 13:17:48.021306 4688 scope.go:117] "RemoveContainer" containerID="7ad0523d7f01b540add0ee302083034de4611d277d3f73ffc112bf68fb001400" Nov 25 13:17:48 crc kubenswrapper[4688]: I1125 13:17:48.022778 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:17:48 crc kubenswrapper[4688]: E1125 13:17:48.023321 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:18:00 crc kubenswrapper[4688]: I1125 13:18:00.748632 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:18:00 crc kubenswrapper[4688]: E1125 13:18:00.749718 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:18:00 crc kubenswrapper[4688]: I1125 13:18:00.881676 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_585e8555-4253-49c0-a482-9aefd967e4d2/init-config-reloader/0.log" Nov 25 13:18:01 crc kubenswrapper[4688]: I1125 13:18:01.092291 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_585e8555-4253-49c0-a482-9aefd967e4d2/init-config-reloader/0.log" Nov 25 13:18:01 crc kubenswrapper[4688]: I1125 13:18:01.113109 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_585e8555-4253-49c0-a482-9aefd967e4d2/config-reloader/0.log" Nov 25 13:18:01 crc kubenswrapper[4688]: I1125 13:18:01.125786 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_585e8555-4253-49c0-a482-9aefd967e4d2/alertmanager/0.log" Nov 25 13:18:01 crc kubenswrapper[4688]: I1125 13:18:01.704031 4688 log.go:25] "Finished parsing log file" 
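
The block above is the full liveness-failure path for machine-config-daemon: the HTTP probe against 127.0.0.1:8798/health is refused because nothing is listening, the kubelet kills the container with a 600s grace period, and subsequent restart attempts are rejected with CrashLoopBackOff. A minimal sketch of the same kind of HTTP liveness check, assuming only the port and path visible in the probe output; the timeout and success criteria here are illustrative, not the kubelet's exact settings:

import urllib.error
import urllib.request

def http_liveness_check(url="http://127.0.0.1:8798/health", timeout=1.0):
    """Judge an HTTP endpoint the way a liveness probe would:
    a 2xx/3xx response is success; connection refused or timeout is failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400, "HTTP %d" % resp.status
    except OSError as exc:  # URLError subclasses OSError; covers "connect: connection refused"
        return False, str(getattr(exc, "reason", exc))

if __name__ == "__main__":
    ok, detail = http_liveness_check()
    print("healthy" if ok else "unhealthy: " + detail)
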
path="/var/log/pods/openstack_aodh-0_a0483ad2-006e-4eb4-aa60-9fe5c8eed056/aodh-api/0.log" Nov 25 13:18:01 crc kubenswrapper[4688]: I1125 13:18:01.762813 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a0483ad2-006e-4eb4-aa60-9fe5c8eed056/aodh-evaluator/0.log" Nov 25 13:18:01 crc kubenswrapper[4688]: I1125 13:18:01.782578 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a0483ad2-006e-4eb4-aa60-9fe5c8eed056/aodh-listener/0.log" Nov 25 13:18:01 crc kubenswrapper[4688]: I1125 13:18:01.795782 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a0483ad2-006e-4eb4-aa60-9fe5c8eed056/aodh-notifier/0.log" Nov 25 13:18:01 crc kubenswrapper[4688]: I1125 13:18:01.993871 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c8499f65b-2b8f7_0a7c7991-8c0b-481a-81f1-62119d1d47e5/barbican-api/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.022132 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c8499f65b-2b8f7_0a7c7991-8c0b-481a-81f1-62119d1d47e5/barbican-api-log/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.200183 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b7fdfcd-rxxwx_414e5c21-70ba-42cc-b382-558d0c95a1ea/barbican-keystone-listener/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.240048 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b7fdfcd-rxxwx_414e5c21-70ba-42cc-b382-558d0c95a1ea/barbican-keystone-listener-log/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.259228 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-75586f5599-pcl94_edbc7ba6-1fa5-418d-a639-3b88eee1c4fb/barbican-worker/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.412793 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-75586f5599-pcl94_edbc7ba6-1fa5-418d-a639-3b88eee1c4fb/barbican-worker-log/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.546549 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2_9b744290-1dac-4fcf-99d7-6a4a7b2287f6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.680174 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_519ebfe6-d6c7-4205-b0f6-23be8445474f/ceilometer-central-agent/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.761981 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_519ebfe6-d6c7-4205-b0f6-23be8445474f/sg-core/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.777598 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_519ebfe6-d6c7-4205-b0f6-23be8445474f/proxy-httpd/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.838309 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_519ebfe6-d6c7-4205-b0f6-23be8445474f/ceilometer-notification-agent/0.log" Nov 25 13:18:02 crc kubenswrapper[4688]: I1125 13:18:02.990415 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6a92f2f8-f7d9-42da-8f61-d595c6e2e10b/cinder-api/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.042224 4688 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-api-0_6a92f2f8-f7d9-42da-8f61-d595c6e2e10b/cinder-api-log/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.213101 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_78027e7c-30ce-4ec6-b928-f9b1836c3568/cinder-scheduler/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.326334 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_78027e7c-30ce-4ec6-b928-f9b1836c3568/probe/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.388247 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz_824692a9-2ed3-41c1-a34d-52ae721df261/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.504409 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m_0decfda5-2230-4d90-bc7c-f641bacb6117/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.601574 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ctv5w_26555a3c-6063-42b0-a1ce-18bebfe41afb/init/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.863159 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ctv5w_26555a3c-6063-42b0-a1ce-18bebfe41afb/init/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.911974 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ctv5w_26555a3c-6063-42b0-a1ce-18bebfe41afb/dnsmasq-dns/0.log" Nov 25 13:18:03 crc kubenswrapper[4688]: I1125 13:18:03.933617 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd_6394a29c-847b-438c-826a-03443a7bb430/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:04 crc kubenswrapper[4688]: I1125 13:18:04.099128 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c/glance-httpd/0.log" Nov 25 13:18:04 crc kubenswrapper[4688]: I1125 13:18:04.147385 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c/glance-log/0.log" Nov 25 13:18:04 crc kubenswrapper[4688]: I1125 13:18:04.282548 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_274d19d3-bdcd-44c9-b44e-48f97d1dc4f5/glance-httpd/0.log" Nov 25 13:18:04 crc kubenswrapper[4688]: I1125 13:18:04.302864 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_274d19d3-bdcd-44c9-b44e-48f97d1dc4f5/glance-log/0.log" Nov 25 13:18:04 crc kubenswrapper[4688]: I1125 13:18:04.651468 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7d45cc658d-s47zc_ab8d1502-0fe9-44cb-af7e-8466e27f75d4/heat-api/0.log" Nov 25 13:18:04 crc kubenswrapper[4688]: I1125 13:18:04.864445 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-ff9c99746-zhh6h_6ed213d5-b2be-4cf1-8416-1ec71b9bb32c/heat-engine/0.log" Nov 25 13:18:04 crc kubenswrapper[4688]: I1125 13:18:04.925356 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7_f5155f9f-2994-43db-9adc-665613ab1711/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:04 crc kubenswrapper[4688]: I1125 13:18:04.967740 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7f444b957c-4fqdt_1efc3d47-fd73-4e3c-9357-5fd608383972/heat-cfnapi/0.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.076792 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8ljvl_a432e01c-b2a4-453c-b40b-d8fadf5a1b3b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.227619 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-544b4d8674-8x8rj_aaf46f92-56fe-402e-831c-7641bd8dc3d2/keystone-api/0.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.310921 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401261-jmm7d_a3a1b93f-b907-499e-b150-f2627f93b4b2/keystone-cron/0.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.388312 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_95c2802c-7143-4d63-8959-434c04453333/kube-state-metrics/3.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.390117 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_95c2802c-7143-4d63-8959-434c04453333/kube-state-metrics/2.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.535694 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9_e884e12f-21a9-42e8-815e-78c0108842d8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.778353 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f854495df-t6szb_afe7afb2-5157-4e1b-964f-c402acb02765/neutron-api/0.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.800871 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f854495df-t6szb_afe7afb2-5157-4e1b-964f-c402acb02765/neutron-httpd/0.log" Nov 25 13:18:05 crc kubenswrapper[4688]: I1125 13:18:05.929311 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6_7a8f5458-7f30-4fd1-963f-b3619c7f506f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:06 crc kubenswrapper[4688]: I1125 13:18:06.193131 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a46e21bc-c734-4c5d-a16a-27860cb65ab0/nova-api-log/0.log" Nov 25 13:18:06 crc kubenswrapper[4688]: I1125 13:18:06.398085 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7dd18c71-cfd7-4552-ab55-c0f00f1a5c46/nova-cell0-conductor-conductor/0.log" Nov 25 13:18:06 crc kubenswrapper[4688]: I1125 13:18:06.484967 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a46e21bc-c734-4c5d-a16a-27860cb65ab0/nova-api-api/0.log" Nov 25 13:18:06 crc kubenswrapper[4688]: I1125 13:18:06.514874 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_92f1ea60-7d39-4b4f-911e-7dfdffffe38b/nova-cell1-conductor-conductor/0.log" Nov 25 13:18:06 crc kubenswrapper[4688]: I1125 13:18:06.716627 4688 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_db662a26-0b85-4b43-9dcd-8b21fd64c3e9/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 13:18:06 crc kubenswrapper[4688]: I1125 13:18:06.821075 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-z88rx_79315b1a-e9e2-422c-8be3-97ebdb2038c0/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:07 crc kubenswrapper[4688]: I1125 13:18:07.063833 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ac5d7790-506b-40b0-9721-6cff85ff053e/nova-metadata-log/0.log" Nov 25 13:18:07 crc kubenswrapper[4688]: I1125 13:18:07.294440 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_eca13d90-02a1-44cc-88ed-dabdce12144a/nova-scheduler-scheduler/0.log" Nov 25 13:18:07 crc kubenswrapper[4688]: I1125 13:18:07.373974 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_666dbc1a-fbdf-4ff1-b949-926ea3e70472/mysql-bootstrap/0.log" Nov 25 13:18:07 crc kubenswrapper[4688]: I1125 13:18:07.511153 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_666dbc1a-fbdf-4ff1-b949-926ea3e70472/mysql-bootstrap/0.log" Nov 25 13:18:07 crc kubenswrapper[4688]: I1125 13:18:07.521900 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_666dbc1a-fbdf-4ff1-b949-926ea3e70472/galera/0.log" Nov 25 13:18:07 crc kubenswrapper[4688]: I1125 13:18:07.748086 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3c9c32d1-459d-4c35-8cf3-876542a657e9/mysql-bootstrap/0.log" Nov 25 13:18:07 crc kubenswrapper[4688]: I1125 13:18:07.938507 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3c9c32d1-459d-4c35-8cf3-876542a657e9/galera/0.log" Nov 25 13:18:07 crc kubenswrapper[4688]: I1125 13:18:07.949767 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3c9c32d1-459d-4c35-8cf3-876542a657e9/mysql-bootstrap/0.log" Nov 25 13:18:08 crc kubenswrapper[4688]: I1125 13:18:08.169548 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3ecf7482-aefd-4e71-a856-9818296c91e7/openstackclient/0.log" Nov 25 13:18:08 crc kubenswrapper[4688]: I1125 13:18:08.228310 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-92hwc_4ac126ff-ac63-4d6a-b201-e6dbd8ba3153/ovn-controller/0.log" Nov 25 13:18:08 crc kubenswrapper[4688]: I1125 13:18:08.319637 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ac5d7790-506b-40b0-9721-6cff85ff053e/nova-metadata-metadata/0.log" Nov 25 13:18:08 crc kubenswrapper[4688]: I1125 13:18:08.469646 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sfg7j_cfcc3ad5-018f-4723-bd38-1384baf3d72e/openstack-network-exporter/0.log" Nov 25 13:18:08 crc kubenswrapper[4688]: I1125 13:18:08.559125 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nndbf_d6f000f3-dc04-44a4-b019-d41633753240/ovsdb-server-init/0.log" Nov 25 13:18:08 crc kubenswrapper[4688]: I1125 13:18:08.781995 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nndbf_d6f000f3-dc04-44a4-b019-d41633753240/ovsdb-server-init/0.log" Nov 25 13:18:08 crc kubenswrapper[4688]: I1125 13:18:08.809624 4688 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nndbf_d6f000f3-dc04-44a4-b019-d41633753240/ovsdb-server/0.log" Nov 25 13:18:08 crc kubenswrapper[4688]: I1125 13:18:08.814919 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nndbf_d6f000f3-dc04-44a4-b019-d41633753240/ovs-vswitchd/0.log" Nov 25 13:18:09 crc kubenswrapper[4688]: I1125 13:18:09.034294 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0dd00154-420b-4be7-84de-ea971d680ff3/ovn-northd/0.log" Nov 25 13:18:09 crc kubenswrapper[4688]: I1125 13:18:09.054277 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dgk5q_ec1f886e-aff6-4077-a770-5bf03fe54bc9/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:09 crc kubenswrapper[4688]: I1125 13:18:09.216369 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0dd00154-420b-4be7-84de-ea971d680ff3/openstack-network-exporter/0.log" Nov 25 13:18:09 crc kubenswrapper[4688]: I1125 13:18:09.540653 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_33b0d963-d13d-4b40-b458-b85ec4f10131/ovsdbserver-nb/0.log" Nov 25 13:18:09 crc kubenswrapper[4688]: I1125 13:18:09.542655 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_33b0d963-d13d-4b40-b458-b85ec4f10131/openstack-network-exporter/0.log" Nov 25 13:18:09 crc kubenswrapper[4688]: I1125 13:18:09.782629 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3644aadc-3c20-41f5-8969-f84b941eef27/ovsdbserver-sb/0.log" Nov 25 13:18:09 crc kubenswrapper[4688]: I1125 13:18:09.808706 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3644aadc-3c20-41f5-8969-f84b941eef27/openstack-network-exporter/0.log" Nov 25 13:18:09 crc kubenswrapper[4688]: I1125 13:18:09.938469 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85ddfb974d-m4b6g_e27f5e13-7857-4d61-bbe1-cb74fb57f7d4/placement-api/0.log" Nov 25 13:18:10 crc kubenswrapper[4688]: I1125 13:18:10.033758 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/init-config-reloader/0.log" Nov 25 13:18:10 crc kubenswrapper[4688]: I1125 13:18:10.072389 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85ddfb974d-m4b6g_e27f5e13-7857-4d61-bbe1-cb74fb57f7d4/placement-log/0.log" Nov 25 13:18:10 crc kubenswrapper[4688]: I1125 13:18:10.309405 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/prometheus/0.log" Nov 25 13:18:10 crc kubenswrapper[4688]: I1125 13:18:10.319132 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/config-reloader/0.log" Nov 25 13:18:10 crc kubenswrapper[4688]: I1125 13:18:10.373330 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/thanos-sidecar/0.log" Nov 25 13:18:10 crc kubenswrapper[4688]: I1125 13:18:10.389146 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/init-config-reloader/0.log" Nov 25 13:18:10 crc kubenswrapper[4688]: I1125 
13:18:10.499271 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_31cb28aa-9d13-4a28-b87d-85abb3af9cef/setup-container/0.log" Nov 25 13:18:11 crc kubenswrapper[4688]: I1125 13:18:11.389783 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_31cb28aa-9d13-4a28-b87d-85abb3af9cef/rabbitmq/0.log" Nov 25 13:18:11 crc kubenswrapper[4688]: I1125 13:18:11.390128 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_31cb28aa-9d13-4a28-b87d-85abb3af9cef/setup-container/0.log" Nov 25 13:18:11 crc kubenswrapper[4688]: I1125 13:18:11.397121 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_24997c07-a110-43df-accd-9daeeff9a29c/setup-container/0.log" Nov 25 13:18:11 crc kubenswrapper[4688]: I1125 13:18:11.629547 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_24997c07-a110-43df-accd-9daeeff9a29c/setup-container/0.log" Nov 25 13:18:11 crc kubenswrapper[4688]: I1125 13:18:11.660455 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_24997c07-a110-43df-accd-9daeeff9a29c/rabbitmq/0.log" Nov 25 13:18:11 crc kubenswrapper[4688]: I1125 13:18:11.672572 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k_9463536c-fd6c-4aee-b3a9-c7f20996f5c7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:11 crc kubenswrapper[4688]: I1125 13:18:11.873080 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8b8pz_567fcefc-5ba1-449d-959c-3209a8d586a9/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:11 crc kubenswrapper[4688]: I1125 13:18:11.910724 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d_e2224f64-766c-4746-b65f-8e235c609a74/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.120824 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-x8w6p_09188b82-0612-4538-b6ca-7517d7da935b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.183386 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-zk4hf_c3ed5d98-1ee9-4de0-9387-cb3082a348bd/ssh-known-hosts-edpm-deployment/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.471832 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5dcbb6d5d7-bpx7g_2c3f8ead-c9ee-4ce5-923a-558a17e1f688/proxy-server/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.587394 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5dcbb6d5d7-bpx7g_2c3f8ead-c9ee-4ce5-923a-558a17e1f688/proxy-httpd/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.653425 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-brb2n_a917ea03-c867-4449-a317-2ed904672efa/swift-ring-rebalance/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.739626 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:18:12 crc kubenswrapper[4688]: E1125 13:18:12.739918 4688 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.773794 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/account-reaper/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.848969 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/account-auditor/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.916448 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/account-replicator/0.log" Nov 25 13:18:12 crc kubenswrapper[4688]: I1125 13:18:12.986714 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/account-server/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.056752 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/container-auditor/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.084844 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/container-replicator/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.184184 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/container-server/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.188726 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/container-updater/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.279827 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-auditor/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.294874 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-expirer/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.435748 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-server/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.435755 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-replicator/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.492179 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-updater/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.531995 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/rsync/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.622149 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/swift-recon-cron/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.765076 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-65sz7_34495de9-ab63-49d9-b01f-a07ec58b7a3f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:13 crc kubenswrapper[4688]: I1125 13:18:13.951995 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr_1f744c38-f708-44f5-952a-419118bcade4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:18:22 crc kubenswrapper[4688]: I1125 13:18:22.758669 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_412ee2a8-6c40-4142-8e09-05f4c22862c0/memcached/0.log" Nov 25 13:18:25 crc kubenswrapper[4688]: I1125 13:18:25.739982 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:18:25 crc kubenswrapper[4688]: E1125 13:18:25.740431 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:18:38 crc kubenswrapper[4688]: I1125 13:18:38.740090 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:18:38 crc kubenswrapper[4688]: E1125 13:18:38.740791 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.341806 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/util/0.log" Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.550496 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log" Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.553062 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/util/0.log" Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.573827 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log" Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.732997 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log" Nov 25 13:18:41 crc 
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.341806 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/util/0.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.550496 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.553062 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/util/0.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.573827 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.732997 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.769401 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/util/0.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.772965 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/extract/0.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.946697 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-q4ffj_6efe1c76-76a3-4c72-bb71-0963553bbb98/manager/1.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.948656 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-q4ffj_6efe1c76-76a3-4c72-bb71-0963553bbb98/kube-rbac-proxy/0.log"
Nov 25 13:18:41 crc kubenswrapper[4688]: I1125 13:18:41.970974 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-q4ffj_6efe1c76-76a3-4c72-bb71-0963553bbb98/manager/2.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.126144 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-ptqrp_87bbdcd1-48cf-4310-9131-93dadc55a0f1/kube-rbac-proxy/0.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.165809 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-ptqrp_87bbdcd1-48cf-4310-9131-93dadc55a0f1/manager/2.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.187229 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-ptqrp_87bbdcd1-48cf-4310-9131-93dadc55a0f1/manager/1.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.327798 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-vkj6d_acc9de1c-caf4-40f2-8e3c-470f1059599a/kube-rbac-proxy/0.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.388905 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-vkj6d_acc9de1c-caf4-40f2-8e3c-470f1059599a/manager/1.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.429829 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-vkj6d_acc9de1c-caf4-40f2-8e3c-470f1059599a/manager/2.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.557532 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-2snng_5c7a1a6d-a3f3-4490-a6ba-f521535a1364/kube-rbac-proxy/0.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.597184 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-2snng_5c7a1a6d-a3f3-4490-a6ba-f521535a1364/manager/1.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.627683 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-2snng_5c7a1a6d-a3f3-4490-a6ba-f521535a1364/manager/2.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.816898 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-b9jdn_92794534-2689-4fde-8597-4cc766d7b3b0/kube-rbac-proxy/0.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.819057 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-b9jdn_92794534-2689-4fde-8597-4cc766d7b3b0/manager/2.log"
Nov 25 13:18:42 crc kubenswrapper[4688]: I1125 13:18:42.848498 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-b9jdn_92794534-2689-4fde-8597-4cc766d7b3b0/manager/1.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.018313 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zfvn2_55967ae9-2dad-4d45-a8c3-bdaa483f9ea7/kube-rbac-proxy/0.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.027872 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zfvn2_55967ae9-2dad-4d45-a8c3-bdaa483f9ea7/manager/2.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.076437 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zfvn2_55967ae9-2dad-4d45-a8c3-bdaa483f9ea7/manager/1.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.237058 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-q2tdz_0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d/kube-rbac-proxy/0.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.295106 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-q2tdz_0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d/manager/2.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.338455 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-q2tdz_0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d/manager/1.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.469412 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-tn6tq_78451e33-7e86-4635-ac5f-d2c6a9ae6e71/kube-rbac-proxy/0.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.496977 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-tn6tq_78451e33-7e86-4635-ac5f-d2c6a9ae6e71/manager/2.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.523451 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-tn6tq_78451e33-7e86-4635-ac5f-d2c6a9ae6e71/manager/1.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.689196 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-9qfpp_592ea8b1-efc4-4027-a7dc-3943125fd935/kube-rbac-proxy/0.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.716576 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-9qfpp_592ea8b1-efc4-4027-a7dc-3943125fd935/manager/2.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.734602 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-9qfpp_592ea8b1-efc4-4027-a7dc-3943125fd935/manager/1.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.940038 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-vcnvc_94f12846-9cbe-4997-9160-3545778ecfde/kube-rbac-proxy/0.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.941280 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-vcnvc_94f12846-9cbe-4997-9160-3545778ecfde/manager/2.log"
Nov 25 13:18:43 crc kubenswrapper[4688]: I1125 13:18:43.967749 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-vcnvc_94f12846-9cbe-4997-9160-3545778ecfde/manager/1.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.141475 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-94snn_7cd9dc7e-be06-416a-aebe-c0b160c79697/kube-rbac-proxy/0.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.146816 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-94snn_7cd9dc7e-be06-416a-aebe-c0b160c79697/manager/2.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.301280 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-94snn_7cd9dc7e-be06-416a-aebe-c0b160c79697/manager/1.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.473344 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-ltlms_808a5b9f-95a2-4f58-abe2-30758a6a7e2a/kube-rbac-proxy/0.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.509675 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-ltlms_808a5b9f-95a2-4f58-abe2-30758a6a7e2a/manager/2.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.529506 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-ltlms_808a5b9f-95a2-4f58-abe2-30758a6a7e2a/manager/1.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.545700 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-kvt5r_e2f91df4-3b39-4c05-9fee-dd3f7622fd13/kube-rbac-proxy/0.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.671824 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-kvt5r_e2f91df4-3b39-4c05-9fee-dd3f7622fd13/manager/2.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.723447 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-kvt5r_e2f91df4-3b39-4c05-9fee-dd3f7622fd13/manager/1.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.744911 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4zlm5_6efa691a-9f05-4d6a-8517-cba5b00426cd/kube-rbac-proxy/0.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.787967 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4zlm5_6efa691a-9f05-4d6a-8517-cba5b00426cd/manager/2.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.845803 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4zlm5_6efa691a-9f05-4d6a-8517-cba5b00426cd/manager/1.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.920021 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh_7f63e16e-9d9b-4e1a-b497-1417e8e7b79e/kube-rbac-proxy/0.log"
Nov 25 13:18:44 crc kubenswrapper[4688]: I1125 13:18:44.941587 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh_7f63e16e-9d9b-4e1a-b497-1417e8e7b79e/manager/1.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.017009 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh_7f63e16e-9d9b-4e1a-b497-1417e8e7b79e/manager/0.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.125801 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_1364865a-3285-428d-b672-064400c43c94/manager/2.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.304090 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-9644ff45d-57xk4_34cf6884-a630-417d-81ff-08c5ff19be31/operator/1.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.388012 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-47kgw_eef7b939-1770-40d0-8ba8-9458f9160a52/registry-server/0.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.453257 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-9644ff45d-57xk4_34cf6884-a630-417d-81ff-08c5ff19be31/operator/0.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.651720 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-gzslz_fa49233e-de1b-4bea-85a6-de285e0e60f6/kube-rbac-proxy/0.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.664495 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_1364865a-3285-428d-b672-064400c43c94/manager/3.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.675748 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-gzslz_fa49233e-de1b-4bea-85a6-de285e0e60f6/manager/1.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.776000 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-gzslz_fa49233e-de1b-4bea-85a6-de285e0e60f6/manager/2.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.878042 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-nxnmg_3649a66a-709f-4b77-b798-e5f90eeb2e5d/manager/2.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.895754 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-nxnmg_3649a66a-709f-4b77-b798-e5f90eeb2e5d/manager/3.log"
Nov 25 13:18:45 crc kubenswrapper[4688]: I1125 13:18:45.922626 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-nxnmg_3649a66a-709f-4b77-b798-e5f90eeb2e5d/kube-rbac-proxy/0.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.115929 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wf6w6_93553656-ef25-4318-81f1-a4e7f973ed38/operator/3.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.130180 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wf6w6_93553656-ef25-4318-81f1-a4e7f973ed38/operator/2.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.147613 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-c76gt_d4c78fcc-139a-4485-8628-dc14422a4710/kube-rbac-proxy/0.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.285961 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-c76gt_d4c78fcc-139a-4485-8628-dc14422a4710/manager/3.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.310385 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-c76gt_d4c78fcc-139a-4485-8628-dc14422a4710/manager/2.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.377932 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/kube-rbac-proxy/0.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.461166 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/2.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.493744 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/1.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.530160 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dcnc8_59ac66df-a38a-4193-a6ff-fd4e74b1b113/kube-rbac-proxy/0.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.623111 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dcnc8_59ac66df-a38a-4193-a6ff-fd4e74b1b113/manager/1.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.690030 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-gf8vv_ae188502-8c93-4a53-bb69-b9a964c82bc6/manager/2.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.714708 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-gf8vv_ae188502-8c93-4a53-bb69-b9a964c82bc6/kube-rbac-proxy/0.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.715821 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dcnc8_59ac66df-a38a-4193-a6ff-fd4e74b1b113/manager/0.log"
Nov 25 13:18:46 crc kubenswrapper[4688]: I1125 13:18:46.849637 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-gf8vv_ae188502-8c93-4a53-bb69-b9a964c82bc6/manager/1.log"
Nov 25 13:18:50 crc kubenswrapper[4688]: I1125 13:18:50.747555 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81"
Nov 25 13:18:50 crc kubenswrapper[4688]: E1125 13:18:50.748372 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:19:01 crc kubenswrapper[4688]: I1125 13:19:01.739877 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81"
Nov 25 13:19:01 crc kubenswrapper[4688]: E1125 13:19:01.742013 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:19:05 crc kubenswrapper[4688]: I1125 13:19:05.149828 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qm98m_486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c/control-plane-machine-set-operator/0.log"
Nov 25 13:19:05 crc kubenswrapper[4688]: I1125 13:19:05.339875 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8djqh_199bf3df-657c-4fec-99c8-00abf00d41c0/kube-rbac-proxy/0.log"
Nov 25 13:19:05 crc kubenswrapper[4688]: I1125 13:19:05.349501 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8djqh_199bf3df-657c-4fec-99c8-00abf00d41c0/machine-api-operator/0.log"
Nov 25 13:19:16 crc kubenswrapper[4688]: I1125 13:19:16.739985 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81"
Nov 25 13:19:16 crc kubenswrapper[4688]: E1125 13:19:16.740615 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jhcbt_8f855a3c-ac32-447f-8fca-8228aa44f91a/cert-manager-controller/0.log" Nov 25 13:19:19 crc kubenswrapper[4688]: I1125 13:19:19.812764 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-d2tx4_3d15c9cb-bc3f-4042-a05e-1a6e66e4348c/cert-manager-cainjector/1.log" Nov 25 13:19:20 crc kubenswrapper[4688]: I1125 13:19:20.006640 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-f4bkk_e46b8030-eb14-4ce3-9519-fdaf23f4f7cb/cert-manager-webhook/0.log" Nov 25 13:19:20 crc kubenswrapper[4688]: I1125 13:19:20.013029 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-d2tx4_3d15c9cb-bc3f-4042-a05e-1a6e66e4348c/cert-manager-cainjector/0.log" Nov 25 13:19:27 crc kubenswrapper[4688]: I1125 13:19:27.740756 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:19:27 crc kubenswrapper[4688]: E1125 13:19:27.741577 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:19:34 crc kubenswrapper[4688]: I1125 13:19:34.894111 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-h8w7b_6bd5cb4b-20c0-4042-b348-001e8084c2f4/nmstate-console-plugin/0.log" Nov 25 13:19:35 crc kubenswrapper[4688]: I1125 13:19:35.029805 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-8pxc2_d63a65a2-b47a-49e4-8489-f7aee9d6929d/kube-rbac-proxy/0.log" Nov 25 13:19:35 crc kubenswrapper[4688]: I1125 13:19:35.040961 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vc899_48a378b8-e17a-41c3-b612-a9c503dcbc58/nmstate-handler/0.log" Nov 25 13:19:35 crc kubenswrapper[4688]: I1125 13:19:35.113750 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-8pxc2_d63a65a2-b47a-49e4-8489-f7aee9d6929d/nmstate-metrics/0.log" Nov 25 13:19:35 crc kubenswrapper[4688]: I1125 13:19:35.258153 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-l75hn_c08e5e0c-e882-43da-8211-ab86d099db71/nmstate-operator/0.log" Nov 25 13:19:35 crc kubenswrapper[4688]: I1125 13:19:35.375380 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-xjfsz_af18cbb6-5f3d-4fa6-914a-421fe283aa4e/nmstate-webhook/0.log" Nov 25 13:19:38 crc kubenswrapper[4688]: I1125 13:19:38.740799 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:19:38 crc kubenswrapper[4688]: E1125 13:19:38.741694 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:19:50 crc kubenswrapper[4688]: I1125 13:19:50.391950 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-8kfck_e48219fb-3aae-42ff-8dec-3d952e97aff1/kube-rbac-proxy/0.log" Nov 25 13:19:50 crc kubenswrapper[4688]: I1125 13:19:50.633215 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-8kfck_e48219fb-3aae-42ff-8dec-3d952e97aff1/controller/0.log" Nov 25 13:19:50 crc kubenswrapper[4688]: I1125 13:19:50.649720 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-frr-files/0.log" Nov 25 13:19:50 crc kubenswrapper[4688]: I1125 13:19:50.745909 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:19:50 crc kubenswrapper[4688]: E1125 13:19:50.746332 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:19:50 crc kubenswrapper[4688]: I1125 13:19:50.830010 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-metrics/0.log" Nov 25 13:19:50 crc kubenswrapper[4688]: I1125 13:19:50.866382 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-frr-files/0.log" Nov 25 13:19:50 crc kubenswrapper[4688]: I1125 13:19:50.878990 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-reloader/0.log" Nov 25 13:19:50 crc kubenswrapper[4688]: I1125 13:19:50.921493 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-reloader/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.059607 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-metrics/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.102020 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-frr-files/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.112113 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-metrics/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.118755 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-reloader/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.313288 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-frr-files/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.314127 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-metrics/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.347820 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-reloader/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.414262 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/controller/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.490588 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/frr-metrics/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.523515 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/kube-rbac-proxy/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.682281 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/kube-rbac-proxy-frr/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.708542 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/reloader/0.log" Nov 25 13:19:51 crc kubenswrapper[4688]: I1125 13:19:51.918478 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-bdmv9_a8a77cbc-9814-4996-9ee3-d1e63f581842/frr-k8s-webhook-server/0.log" Nov 25 13:19:52 crc kubenswrapper[4688]: I1125 13:19:52.047893 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-744bc4ddc8-58c5m_d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3/manager/3.log" Nov 25 13:19:52 crc kubenswrapper[4688]: I1125 13:19:52.212707 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-744bc4ddc8-58c5m_d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3/manager/2.log" Nov 25 13:19:52 crc kubenswrapper[4688]: I1125 13:19:52.326749 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-94654dbc4-7h22l_29f74196-0858-470b-8d69-2a8c67753827/webhook-server/0.log" Nov 25 13:19:52 crc kubenswrapper[4688]: I1125 13:19:52.558461 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lthrr_4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7/kube-rbac-proxy/0.log" Nov 25 13:19:53 crc kubenswrapper[4688]: I1125 13:19:53.016088 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/frr/0.log" Nov 25 13:19:53 crc kubenswrapper[4688]: I1125 13:19:53.186984 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lthrr_4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7/speaker/0.log" Nov 25 13:20:01 crc kubenswrapper[4688]: I1125 13:20:01.740236 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:20:01 crc kubenswrapper[4688]: E1125 13:20:01.741015 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:20:05 crc kubenswrapper[4688]: I1125 13:20:05.650768 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/util/0.log" Nov 25 13:20:05 crc kubenswrapper[4688]: I1125 13:20:05.822670 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/util/0.log" Nov 25 13:20:05 crc kubenswrapper[4688]: I1125 13:20:05.834150 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/pull/0.log" Nov 25 13:20:05 crc kubenswrapper[4688]: I1125 13:20:05.840183 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/pull/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.029991 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/pull/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.063291 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/extract/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.078478 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/util/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.195018 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/util/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.402327 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/pull/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.423025 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/util/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.449613 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/pull/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.559400 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/util/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.617946 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/pull/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.629770 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/extract/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.798994 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-utilities/0.log" Nov 25 13:20:06 crc kubenswrapper[4688]: I1125 13:20:06.979666 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-content/0.log" Nov 25 13:20:07 crc kubenswrapper[4688]: I1125 13:20:07.028379 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-content/0.log" Nov 25 13:20:07 crc kubenswrapper[4688]: I1125 13:20:07.028424 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-utilities/0.log" Nov 25 13:20:07 crc kubenswrapper[4688]: I1125 13:20:07.224570 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-utilities/0.log" Nov 25 13:20:07 crc kubenswrapper[4688]: I1125 13:20:07.224593 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-content/0.log" Nov 25 13:20:07 crc kubenswrapper[4688]: I1125 13:20:07.624928 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-utilities/0.log" Nov 25 13:20:07 crc kubenswrapper[4688]: I1125 13:20:07.805964 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-content/0.log" Nov 25 13:20:07 crc kubenswrapper[4688]: I1125 13:20:07.807603 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-utilities/0.log" Nov 25 13:20:07 crc kubenswrapper[4688]: I1125 13:20:07.828472 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-content/0.log" Nov 25 13:20:08 crc kubenswrapper[4688]: I1125 13:20:08.110600 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-utilities/0.log" Nov 25 13:20:08 crc kubenswrapper[4688]: I1125 13:20:08.160547 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-content/0.log" Nov 25 13:20:08 crc kubenswrapper[4688]: I1125 13:20:08.225694 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/registry-server/0.log" Nov 25 13:20:08 crc kubenswrapper[4688]: I1125 13:20:08.389924 4688 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/util/0.log" Nov 25 13:20:08 crc kubenswrapper[4688]: I1125 13:20:08.655181 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/util/0.log" Nov 25 13:20:08 crc kubenswrapper[4688]: I1125 13:20:08.756293 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/pull/0.log" Nov 25 13:20:08 crc kubenswrapper[4688]: I1125 13:20:08.869939 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/pull/0.log" Nov 25 13:20:08 crc kubenswrapper[4688]: I1125 13:20:08.978642 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/util/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.027374 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/registry-server/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.072422 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/pull/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.131205 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/extract/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.216215 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ps6lt_499bbc68-a6dd-4670-acef-2dfcce904fc3/marketplace-operator/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.315352 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-utilities/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.495514 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-content/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.523368 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-content/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.542811 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-utilities/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.696154 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-utilities/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.726632 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-content/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.781692 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-utilities/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.792770 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/registry-server/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.933778 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-utilities/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.970941 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-content/0.log" Nov 25 13:20:09 crc kubenswrapper[4688]: I1125 13:20:09.999567 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-content/0.log" Nov 25 13:20:10 crc kubenswrapper[4688]: I1125 13:20:10.143587 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-utilities/0.log" Nov 25 13:20:10 crc kubenswrapper[4688]: I1125 13:20:10.160887 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-content/0.log" Nov 25 13:20:10 crc kubenswrapper[4688]: I1125 13:20:10.369645 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/registry-server/0.log" Nov 25 13:20:16 crc kubenswrapper[4688]: I1125 13:20:16.740447 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:20:16 crc kubenswrapper[4688]: E1125 13:20:16.741228 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:20:22 crc kubenswrapper[4688]: I1125 13:20:22.429819 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-9n7g5_48743801-c673-4010-931f-62cdb6ecaa61/prometheus-operator/0.log" Nov 25 13:20:22 crc kubenswrapper[4688]: I1125 13:20:22.615632 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s_fb4b1751-6bd8-418b-a987-7314862f08dc/prometheus-operator-admission-webhook/0.log" Nov 25 13:20:22 crc kubenswrapper[4688]: I1125 13:20:22.630976 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk_c4d68b3c-cac2-4e12-ac0c-788a6e134a8a/prometheus-operator-admission-webhook/0.log" Nov 25 13:20:22 crc kubenswrapper[4688]: I1125 13:20:22.804536 
4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-md9tl_6995a349-20f2-40e7-a7f9-0ee6c8535bd1/perses-operator/0.log" Nov 25 13:20:22 crc kubenswrapper[4688]: I1125 13:20:22.815128 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-fv2lb_603a75fc-30c7-4bcf-98ee-1b24c2c0c93c/operator/0.log" Nov 25 13:20:29 crc kubenswrapper[4688]: I1125 13:20:29.741339 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:20:29 crc kubenswrapper[4688]: E1125 13:20:29.742154 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:20:43 crc kubenswrapper[4688]: E1125 13:20:43.751799 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.159:60512->38.102.83.159:39911: write tcp 38.102.83.159:60512->38.102.83.159:39911: write: broken pipe Nov 25 13:20:44 crc kubenswrapper[4688]: I1125 13:20:44.741007 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:20:44 crc kubenswrapper[4688]: E1125 13:20:44.742083 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:20:59 crc kubenswrapper[4688]: I1125 13:20:59.739932 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:20:59 crc kubenswrapper[4688]: E1125 13:20:59.740846 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:21:12 crc kubenswrapper[4688]: I1125 13:21:12.740140 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:21:12 crc kubenswrapper[4688]: E1125 13:21:12.740951 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:21:26 crc kubenswrapper[4688]: I1125 13:21:26.741622 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 
13:21:26 crc kubenswrapper[4688]: E1125 13:21:26.742930 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:21:39 crc kubenswrapper[4688]: I1125 13:21:39.741370 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:21:39 crc kubenswrapper[4688]: E1125 13:21:39.742274 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:21:53 crc kubenswrapper[4688]: I1125 13:21:53.739949 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:21:53 crc kubenswrapper[4688]: E1125 13:21:53.741059 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:21:54 crc kubenswrapper[4688]: I1125 13:21:54.523933 4688 generic.go:334] "Generic (PLEG): container finished" podID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerID="11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b" exitCode=0 Nov 25 13:21:54 crc kubenswrapper[4688]: I1125 13:21:54.524053 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gzgxj/must-gather-n64sb" event={"ID":"f61c0ee9-6f22-4777-b164-2c48769b3b94","Type":"ContainerDied","Data":"11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b"} Nov 25 13:21:54 crc kubenswrapper[4688]: I1125 13:21:54.524895 4688 scope.go:117] "RemoveContainer" containerID="11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b" Nov 25 13:21:54 crc kubenswrapper[4688]: I1125 13:21:54.897265 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gzgxj_must-gather-n64sb_f61c0ee9-6f22-4777-b164-2c48769b3b94/gather/0.log" Nov 25 13:22:02 crc kubenswrapper[4688]: I1125 13:22:02.953339 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gzgxj/must-gather-n64sb"] Nov 25 13:22:02 crc kubenswrapper[4688]: I1125 13:22:02.954124 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gzgxj/must-gather-n64sb" podUID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerName="copy" containerID="cri-o://0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb" gracePeriod=2 Nov 25 13:22:02 crc kubenswrapper[4688]: I1125 13:22:02.970509 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gzgxj/must-gather-n64sb"] Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 
13:22:03.479697 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gzgxj_must-gather-n64sb_f61c0ee9-6f22-4777-b164-2c48769b3b94/copy/0.log" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.480442 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.571086 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f61c0ee9-6f22-4777-b164-2c48769b3b94-must-gather-output\") pod \"f61c0ee9-6f22-4777-b164-2c48769b3b94\" (UID: \"f61c0ee9-6f22-4777-b164-2c48769b3b94\") " Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.574730 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfdw4\" (UniqueName: \"kubernetes.io/projected/f61c0ee9-6f22-4777-b164-2c48769b3b94-kube-api-access-tfdw4\") pod \"f61c0ee9-6f22-4777-b164-2c48769b3b94\" (UID: \"f61c0ee9-6f22-4777-b164-2c48769b3b94\") " Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.587430 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61c0ee9-6f22-4777-b164-2c48769b3b94-kube-api-access-tfdw4" (OuterVolumeSpecName: "kube-api-access-tfdw4") pod "f61c0ee9-6f22-4777-b164-2c48769b3b94" (UID: "f61c0ee9-6f22-4777-b164-2c48769b3b94"). InnerVolumeSpecName "kube-api-access-tfdw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.624742 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gzgxj_must-gather-n64sb_f61c0ee9-6f22-4777-b164-2c48769b3b94/copy/0.log" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.625131 4688 generic.go:334] "Generic (PLEG): container finished" podID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerID="0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb" exitCode=143 Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.625186 4688 scope.go:117] "RemoveContainer" containerID="0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.625288 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gzgxj/must-gather-n64sb" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.646291 4688 scope.go:117] "RemoveContainer" containerID="11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.678207 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfdw4\" (UniqueName: \"kubernetes.io/projected/f61c0ee9-6f22-4777-b164-2c48769b3b94-kube-api-access-tfdw4\") on node \"crc\" DevicePath \"\"" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.714886 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61c0ee9-6f22-4777-b164-2c48769b3b94-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f61c0ee9-6f22-4777-b164-2c48769b3b94" (UID: "f61c0ee9-6f22-4777-b164-2c48769b3b94"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.729475 4688 scope.go:117] "RemoveContainer" containerID="0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb" Nov 25 13:22:03 crc kubenswrapper[4688]: E1125 13:22:03.729983 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb\": container with ID starting with 0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb not found: ID does not exist" containerID="0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.730033 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb"} err="failed to get container status \"0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb\": rpc error: code = NotFound desc = could not find container \"0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb\": container with ID starting with 0a3c13baf05b96e7d0511b490ce6efe08c00ff049ec13ee5ad2ac7ce648f22eb not found: ID does not exist" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.730066 4688 scope.go:117] "RemoveContainer" containerID="11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b" Nov 25 13:22:03 crc kubenswrapper[4688]: E1125 13:22:03.730418 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b\": container with ID starting with 11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b not found: ID does not exist" containerID="11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.730448 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b"} err="failed to get container status \"11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b\": rpc error: code = NotFound desc = could not find container \"11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b\": container with ID starting with 11277b47b41fa825c5df57862b94b31fcbbf061ea3c2a062f7d59e8e982fd03b not found: ID does not exist" Nov 25 13:22:03 crc kubenswrapper[4688]: I1125 13:22:03.780769 4688 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f61c0ee9-6f22-4777-b164-2c48769b3b94-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 13:22:04 crc kubenswrapper[4688]: I1125 13:22:04.754176 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61c0ee9-6f22-4777-b164-2c48769b3b94" path="/var/lib/kubelet/pods/f61c0ee9-6f22-4777-b164-2c48769b3b94/volumes" Nov 25 13:22:08 crc kubenswrapper[4688]: I1125 13:22:08.740196 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:22:08 crc kubenswrapper[4688]: E1125 13:22:08.741063 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:22:21 crc kubenswrapper[4688]: I1125 13:22:21.741103 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:22:21 crc kubenswrapper[4688]: E1125 13:22:21.741851 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:22:33 crc kubenswrapper[4688]: I1125 13:22:33.739484 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:22:33 crc kubenswrapper[4688]: E1125 13:22:33.740451 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:22:45 crc kubenswrapper[4688]: I1125 13:22:45.741308 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:22:45 crc kubenswrapper[4688]: E1125 13:22:45.742693 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:22:58 crc kubenswrapper[4688]: I1125 13:22:58.741266 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:22:59 crc kubenswrapper[4688]: I1125 13:22:59.213806 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"77d88e46a36225e058b9406bf0627cacbc3ffa65e3efd7cc35fdce3c5d13c2b6"} Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.193889 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l2hch"] Nov 25 13:23:05 crc kubenswrapper[4688]: E1125 13:23:05.195751 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerName="gather" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.195838 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerName="gather" Nov 25 13:23:05 crc kubenswrapper[4688]: E1125 13:23:05.195920 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a1470a-ae9f-4f7a-a42d-fcb005f88753" containerName="container-00" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 
13:23:05.195978 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a1470a-ae9f-4f7a-a42d-fcb005f88753" containerName="container-00" Nov 25 13:23:05 crc kubenswrapper[4688]: E1125 13:23:05.196037 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerName="copy" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.196093 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerName="copy" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.196348 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerName="gather" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.196427 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61c0ee9-6f22-4777-b164-2c48769b3b94" containerName="copy" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.196500 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a1470a-ae9f-4f7a-a42d-fcb005f88753" containerName="container-00" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.198042 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.214986 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2hch"] Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.319418 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-utilities\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.319490 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-catalog-content\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.319691 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzlk9\" (UniqueName: \"kubernetes.io/projected/0e7f110a-341d-4745-ae7e-43327de8dde1-kube-api-access-pzlk9\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.421658 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-utilities\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.421729 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-catalog-content\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.421887 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzlk9\" (UniqueName: \"kubernetes.io/projected/0e7f110a-341d-4745-ae7e-43327de8dde1-kube-api-access-pzlk9\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.422600 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-catalog-content\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.422750 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-utilities\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.445570 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzlk9\" (UniqueName: \"kubernetes.io/projected/0e7f110a-341d-4745-ae7e-43327de8dde1-kube-api-access-pzlk9\") pod \"redhat-operators-l2hch\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:05 crc kubenswrapper[4688]: I1125 13:23:05.519387 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:06 crc kubenswrapper[4688]: I1125 13:23:06.001539 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2hch"] Nov 25 13:23:06 crc kubenswrapper[4688]: I1125 13:23:06.280029 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hch" event={"ID":"0e7f110a-341d-4745-ae7e-43327de8dde1","Type":"ContainerStarted","Data":"959acfd5935d3ef87d9dd0234a309eafc560f9c64c30dd5dbfbe66ffa1105212"} Nov 25 13:23:07 crc kubenswrapper[4688]: I1125 13:23:07.309467 4688 generic.go:334] "Generic (PLEG): container finished" podID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerID="35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb" exitCode=0 Nov 25 13:23:07 crc kubenswrapper[4688]: I1125 13:23:07.310005 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hch" event={"ID":"0e7f110a-341d-4745-ae7e-43327de8dde1","Type":"ContainerDied","Data":"35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb"} Nov 25 13:23:07 crc kubenswrapper[4688]: I1125 13:23:07.316625 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 13:23:08 crc kubenswrapper[4688]: I1125 13:23:08.320232 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hch" event={"ID":"0e7f110a-341d-4745-ae7e-43327de8dde1","Type":"ContainerStarted","Data":"581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225"} Nov 25 13:23:11 crc kubenswrapper[4688]: I1125 13:23:11.349274 4688 generic.go:334] "Generic (PLEG): container finished" podID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerID="581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225" exitCode=0 Nov 25 13:23:11 crc kubenswrapper[4688]: I1125 13:23:11.349383 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hch" event={"ID":"0e7f110a-341d-4745-ae7e-43327de8dde1","Type":"ContainerDied","Data":"581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225"} Nov 25 13:23:12 crc kubenswrapper[4688]: I1125 13:23:12.361433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hch" event={"ID":"0e7f110a-341d-4745-ae7e-43327de8dde1","Type":"ContainerStarted","Data":"5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4"} Nov 25 13:23:12 crc kubenswrapper[4688]: I1125 13:23:12.387568 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l2hch" podStartSLOduration=2.959969551 podStartE2EDuration="7.387551275s" podCreationTimestamp="2025-11-25 13:23:05 +0000 UTC" firstStartedPulling="2025-11-25 13:23:07.314919963 +0000 UTC m=+4137.424548841" lastFinishedPulling="2025-11-25 13:23:11.742501697 +0000 UTC m=+4141.852130565" observedRunningTime="2025-11-25 13:23:12.377842252 +0000 UTC m=+4142.487471120" watchObservedRunningTime="2025-11-25 13:23:12.387551275 +0000 UTC m=+4142.497180143" Nov 25 13:23:15 crc kubenswrapper[4688]: I1125 13:23:15.520348 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:15 crc kubenswrapper[4688]: I1125 13:23:15.520891 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:16 crc kubenswrapper[4688]: I1125 13:23:16.576163 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l2hch" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="registry-server" probeResult="failure" output=< Nov 25 13:23:16 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Nov 25 13:23:16 crc kubenswrapper[4688]: > Nov 25 13:23:25 crc kubenswrapper[4688]: I1125 13:23:25.565356 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:25 crc kubenswrapper[4688]: I1125 13:23:25.616200 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:25 crc kubenswrapper[4688]: I1125 13:23:25.807897 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2hch"] Nov 25 13:23:27 crc kubenswrapper[4688]: I1125 13:23:27.499555 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l2hch" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="registry-server" containerID="cri-o://5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4" gracePeriod=2 Nov 25 13:23:27 crc kubenswrapper[4688]: I1125 13:23:27.972915 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.060260 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzlk9\" (UniqueName: \"kubernetes.io/projected/0e7f110a-341d-4745-ae7e-43327de8dde1-kube-api-access-pzlk9\") pod \"0e7f110a-341d-4745-ae7e-43327de8dde1\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.060392 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-catalog-content\") pod \"0e7f110a-341d-4745-ae7e-43327de8dde1\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.060693 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-utilities\") pod \"0e7f110a-341d-4745-ae7e-43327de8dde1\" (UID: \"0e7f110a-341d-4745-ae7e-43327de8dde1\") " Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.062369 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-utilities" (OuterVolumeSpecName: "utilities") pod "0e7f110a-341d-4745-ae7e-43327de8dde1" (UID: "0e7f110a-341d-4745-ae7e-43327de8dde1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.071937 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7f110a-341d-4745-ae7e-43327de8dde1-kube-api-access-pzlk9" (OuterVolumeSpecName: "kube-api-access-pzlk9") pod "0e7f110a-341d-4745-ae7e-43327de8dde1" (UID: "0e7f110a-341d-4745-ae7e-43327de8dde1"). InnerVolumeSpecName "kube-api-access-pzlk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.163090 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzlk9\" (UniqueName: \"kubernetes.io/projected/0e7f110a-341d-4745-ae7e-43327de8dde1-kube-api-access-pzlk9\") on node \"crc\" DevicePath \"\"" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.163136 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.168341 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e7f110a-341d-4745-ae7e-43327de8dde1" (UID: "0e7f110a-341d-4745-ae7e-43327de8dde1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.265309 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7f110a-341d-4745-ae7e-43327de8dde1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.516237 4688 generic.go:334] "Generic (PLEG): container finished" podID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerID="5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4" exitCode=0 Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.516316 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hch" event={"ID":"0e7f110a-341d-4745-ae7e-43327de8dde1","Type":"ContainerDied","Data":"5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4"} Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.516376 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2hch" event={"ID":"0e7f110a-341d-4745-ae7e-43327de8dde1","Type":"ContainerDied","Data":"959acfd5935d3ef87d9dd0234a309eafc560f9c64c30dd5dbfbe66ffa1105212"} Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.516407 4688 scope.go:117] "RemoveContainer" containerID="5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.516407 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2hch" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.544842 4688 scope.go:117] "RemoveContainer" containerID="581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.563847 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2hch"] Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.576392 4688 scope.go:117] "RemoveContainer" containerID="35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.579273 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l2hch"] Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.613293 4688 scope.go:117] "RemoveContainer" containerID="5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4" Nov 25 13:23:28 crc kubenswrapper[4688]: E1125 13:23:28.614122 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4\": container with ID starting with 5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4 not found: ID does not exist" containerID="5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.614167 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4"} err="failed to get container status \"5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4\": rpc error: code = NotFound desc = could not find container \"5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4\": container with ID starting with 5c3eb7d7e4d42e051a25043dc90431f2bd64f60964bb2d3cdf54897e45da69e4 not found: ID does not exist" Nov 25 13:23:28 crc 
kubenswrapper[4688]: I1125 13:23:28.614198 4688 scope.go:117] "RemoveContainer" containerID="581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225" Nov 25 13:23:28 crc kubenswrapper[4688]: E1125 13:23:28.614718 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225\": container with ID starting with 581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225 not found: ID does not exist" containerID="581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.614742 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225"} err="failed to get container status \"581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225\": rpc error: code = NotFound desc = could not find container \"581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225\": container with ID starting with 581ae4655d0c31d3572b5076e41634d45ac10066c46467c744da99a343fa8225 not found: ID does not exist" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.614758 4688 scope.go:117] "RemoveContainer" containerID="35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb" Nov 25 13:23:28 crc kubenswrapper[4688]: E1125 13:23:28.615042 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb\": container with ID starting with 35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb not found: ID does not exist" containerID="35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.615066 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb"} err="failed to get container status \"35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb\": rpc error: code = NotFound desc = could not find container \"35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb\": container with ID starting with 35ee9955222698c34c74034f17f755c5f4ea83a3fe1257ce654fa301944f63bb not found: ID does not exist" Nov 25 13:23:28 crc kubenswrapper[4688]: I1125 13:23:28.751729 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" path="/var/lib/kubelet/pods/0e7f110a-341d-4745-ae7e-43327de8dde1/volumes" Nov 25 13:23:35 crc kubenswrapper[4688]: I1125 13:23:35.505364 4688 scope.go:117] "RemoveContainer" containerID="907016dd67b8efb2d65de0f50eedfc3043a24b5c798ae05232f7c2a13d784475" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.810243 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7qmqw/must-gather-zmw4z"] Nov 25 13:24:41 crc kubenswrapper[4688]: E1125 13:24:41.811220 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="extract-utilities" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.811233 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="extract-utilities" Nov 25 13:24:41 crc kubenswrapper[4688]: E1125 13:24:41.811248 4688 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="registry-server" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.811254 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="registry-server" Nov 25 13:24:41 crc kubenswrapper[4688]: E1125 13:24:41.811294 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="extract-content" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.811299 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="extract-content" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.811482 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7f110a-341d-4745-ae7e-43327de8dde1" containerName="registry-server" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.812597 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.826632 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7qmqw"/"openshift-service-ca.crt" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.826632 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7qmqw"/"default-dockercfg-ldmlq" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.827243 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7qmqw"/"kube-root-ca.crt" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.851369 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7qmqw/must-gather-zmw4z"] Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.961216 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctff7\" (UniqueName: \"kubernetes.io/projected/2ffce485-f876-4675-91df-28d6832c6604-kube-api-access-ctff7\") pod \"must-gather-zmw4z\" (UID: \"2ffce485-f876-4675-91df-28d6832c6604\") " pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:24:41 crc kubenswrapper[4688]: I1125 13:24:41.961381 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2ffce485-f876-4675-91df-28d6832c6604-must-gather-output\") pod \"must-gather-zmw4z\" (UID: \"2ffce485-f876-4675-91df-28d6832c6604\") " pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:24:42 crc kubenswrapper[4688]: I1125 13:24:42.063175 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2ffce485-f876-4675-91df-28d6832c6604-must-gather-output\") pod \"must-gather-zmw4z\" (UID: \"2ffce485-f876-4675-91df-28d6832c6604\") " pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:24:42 crc kubenswrapper[4688]: I1125 13:24:42.063341 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctff7\" (UniqueName: \"kubernetes.io/projected/2ffce485-f876-4675-91df-28d6832c6604-kube-api-access-ctff7\") pod \"must-gather-zmw4z\" (UID: \"2ffce485-f876-4675-91df-28d6832c6604\") " pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:24:42 crc kubenswrapper[4688]: I1125 13:24:42.064059 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2ffce485-f876-4675-91df-28d6832c6604-must-gather-output\") pod \"must-gather-zmw4z\" (UID: \"2ffce485-f876-4675-91df-28d6832c6604\") " pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:24:42 crc kubenswrapper[4688]: I1125 13:24:42.089886 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctff7\" (UniqueName: \"kubernetes.io/projected/2ffce485-f876-4675-91df-28d6832c6604-kube-api-access-ctff7\") pod \"must-gather-zmw4z\" (UID: \"2ffce485-f876-4675-91df-28d6832c6604\") " pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:24:42 crc kubenswrapper[4688]: I1125 13:24:42.135149 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:24:42 crc kubenswrapper[4688]: I1125 13:24:42.660010 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7qmqw/must-gather-zmw4z"] Nov 25 13:24:43 crc kubenswrapper[4688]: I1125 13:24:43.312835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" event={"ID":"2ffce485-f876-4675-91df-28d6832c6604","Type":"ContainerStarted","Data":"0abb506c8ba63e41a65d9a1f708280fd702ccb96bd05bc0beba4488f341d166f"} Nov 25 13:24:43 crc kubenswrapper[4688]: I1125 13:24:43.313216 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" event={"ID":"2ffce485-f876-4675-91df-28d6832c6604","Type":"ContainerStarted","Data":"6197c352c94d95b1348d36c0e6dbb94a14209fc15378676ec16b80a990a870fb"} Nov 25 13:24:44 crc kubenswrapper[4688]: I1125 13:24:44.322276 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" event={"ID":"2ffce485-f876-4675-91df-28d6832c6604","Type":"ContainerStarted","Data":"b294bb65ffa0f575951ae8313e3b2f87c8ce4fd3618a155fda8c98236e0f9857"} Nov 25 13:24:44 crc kubenswrapper[4688]: I1125 13:24:44.379584 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" podStartSLOduration=3.379562312 podStartE2EDuration="3.379562312s" podCreationTimestamp="2025-11-25 13:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:24:44.366956751 +0000 UTC m=+4234.476585629" watchObservedRunningTime="2025-11-25 13:24:44.379562312 +0000 UTC m=+4234.489191190" Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.377638 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7qmqw/crc-debug-nqjsb"] Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.379324 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.465138 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjfvv\" (UniqueName: \"kubernetes.io/projected/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-kube-api-access-qjfvv\") pod \"crc-debug-nqjsb\" (UID: \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\") " pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.465310 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-host\") pod \"crc-debug-nqjsb\" (UID: \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\") " pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.567911 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-host\") pod \"crc-debug-nqjsb\" (UID: \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\") " pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.568060 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-host\") pod \"crc-debug-nqjsb\" (UID: \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\") " pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.568164 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjfvv\" (UniqueName: \"kubernetes.io/projected/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-kube-api-access-qjfvv\") pod \"crc-debug-nqjsb\" (UID: \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\") " pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.591347 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjfvv\" (UniqueName: \"kubernetes.io/projected/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-kube-api-access-qjfvv\") pod \"crc-debug-nqjsb\" (UID: \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\") " pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:24:47 crc kubenswrapper[4688]: I1125 13:24:47.696971 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:24:47 crc kubenswrapper[4688]: W1125 13:24:47.735987 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbab6e0f8_71c5_4c99_9073_ffdf3843a8a9.slice/crio-e65c111cf31d2b0873e0546c1eeb1fe1bd2f150273ef94991a0be3d2a5f739cf WatchSource:0}: Error finding container e65c111cf31d2b0873e0546c1eeb1fe1bd2f150273ef94991a0be3d2a5f739cf: Status 404 returned error can't find the container with id e65c111cf31d2b0873e0546c1eeb1fe1bd2f150273ef94991a0be3d2a5f739cf Nov 25 13:24:48 crc kubenswrapper[4688]: I1125 13:24:48.368202 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" event={"ID":"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9","Type":"ContainerStarted","Data":"fcdee3ea76a12af35c9c68bc6e2aa1edaa5c84ecf16626531ef175ba26030a64"} Nov 25 13:24:48 crc kubenswrapper[4688]: I1125 13:24:48.368816 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" event={"ID":"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9","Type":"ContainerStarted","Data":"e65c111cf31d2b0873e0546c1eeb1fe1bd2f150273ef94991a0be3d2a5f739cf"} Nov 25 13:24:48 crc kubenswrapper[4688]: I1125 13:24:48.390858 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" podStartSLOduration=1.390833238 podStartE2EDuration="1.390833238s" podCreationTimestamp="2025-11-25 13:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:24:48.382345829 +0000 UTC m=+4238.491974697" watchObservedRunningTime="2025-11-25 13:24:48.390833238 +0000 UTC m=+4238.500462106" Nov 25 13:25:05 crc kubenswrapper[4688]: I1125 13:25:05.518502 4688 generic.go:334] "Generic (PLEG): container finished" podID="bab6e0f8-71c5-4c99-9073-ffdf3843a8a9" containerID="fcdee3ea76a12af35c9c68bc6e2aa1edaa5c84ecf16626531ef175ba26030a64" exitCode=0 Nov 25 13:25:05 crc kubenswrapper[4688]: I1125 13:25:05.518571 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" event={"ID":"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9","Type":"ContainerDied","Data":"fcdee3ea76a12af35c9c68bc6e2aa1edaa5c84ecf16626531ef175ba26030a64"} Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.659383 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.701495 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7qmqw/crc-debug-nqjsb"] Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.710994 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7qmqw/crc-debug-nqjsb"] Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.751559 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjfvv\" (UniqueName: \"kubernetes.io/projected/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-kube-api-access-qjfvv\") pod \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\" (UID: \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\") " Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.751615 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-host\") pod \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\" (UID: \"bab6e0f8-71c5-4c99-9073-ffdf3843a8a9\") " Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.751785 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-host" (OuterVolumeSpecName: "host") pod "bab6e0f8-71c5-4c99-9073-ffdf3843a8a9" (UID: "bab6e0f8-71c5-4c99-9073-ffdf3843a8a9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.752463 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-host\") on node \"crc\" DevicePath \"\"" Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.757053 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-kube-api-access-qjfvv" (OuterVolumeSpecName: "kube-api-access-qjfvv") pod "bab6e0f8-71c5-4c99-9073-ffdf3843a8a9" (UID: "bab6e0f8-71c5-4c99-9073-ffdf3843a8a9"). InnerVolumeSpecName "kube-api-access-qjfvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:25:06 crc kubenswrapper[4688]: I1125 13:25:06.855338 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjfvv\" (UniqueName: \"kubernetes.io/projected/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9-kube-api-access-qjfvv\") on node \"crc\" DevicePath \"\"" Nov 25 13:25:07 crc kubenswrapper[4688]: I1125 13:25:07.539045 4688 scope.go:117] "RemoveContainer" containerID="fcdee3ea76a12af35c9c68bc6e2aa1edaa5c84ecf16626531ef175ba26030a64" Nov 25 13:25:07 crc kubenswrapper[4688]: I1125 13:25:07.539067 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7qmqw/crc-debug-nqjsb" Nov 25 13:25:07 crc kubenswrapper[4688]: I1125 13:25:07.890622 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7qmqw/crc-debug-948qk"] Nov 25 13:25:07 crc kubenswrapper[4688]: E1125 13:25:07.892355 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab6e0f8-71c5-4c99-9073-ffdf3843a8a9" containerName="container-00" Nov 25 13:25:07 crc kubenswrapper[4688]: I1125 13:25:07.892465 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab6e0f8-71c5-4c99-9073-ffdf3843a8a9" containerName="container-00" Nov 25 13:25:07 crc kubenswrapper[4688]: I1125 13:25:07.892811 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab6e0f8-71c5-4c99-9073-ffdf3843a8a9" containerName="container-00" Nov 25 13:25:07 crc kubenswrapper[4688]: I1125 13:25:07.893767 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:07 crc kubenswrapper[4688]: I1125 13:25:07.977997 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhpk4\" (UniqueName: \"kubernetes.io/projected/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-kube-api-access-bhpk4\") pod \"crc-debug-948qk\" (UID: \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\") " pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:07 crc kubenswrapper[4688]: I1125 13:25:07.978152 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-host\") pod \"crc-debug-948qk\" (UID: \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\") " pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:08 crc kubenswrapper[4688]: I1125 13:25:08.079793 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-host\") pod \"crc-debug-948qk\" (UID: \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\") " pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:08 crc kubenswrapper[4688]: I1125 13:25:08.079915 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhpk4\" (UniqueName: \"kubernetes.io/projected/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-kube-api-access-bhpk4\") pod \"crc-debug-948qk\" (UID: \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\") " pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:08 crc kubenswrapper[4688]: I1125 13:25:08.080296 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-host\") pod \"crc-debug-948qk\" (UID: \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\") " pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:08 crc kubenswrapper[4688]: I1125 13:25:08.105602 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhpk4\" (UniqueName: \"kubernetes.io/projected/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-kube-api-access-bhpk4\") pod \"crc-debug-948qk\" (UID: \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\") " pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:08 crc kubenswrapper[4688]: I1125 13:25:08.210205 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:08 crc kubenswrapper[4688]: I1125 13:25:08.550213 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/crc-debug-948qk" event={"ID":"f7d7f40b-63c6-4b6e-baef-292b0d5d3118","Type":"ContainerStarted","Data":"cb3f53b20496913cce36f76a7541a51f250823c1310c80e2a1337aea0a528bf1"} Nov 25 13:25:08 crc kubenswrapper[4688]: I1125 13:25:08.750123 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab6e0f8-71c5-4c99-9073-ffdf3843a8a9" path="/var/lib/kubelet/pods/bab6e0f8-71c5-4c99-9073-ffdf3843a8a9/volumes" Nov 25 13:25:09 crc kubenswrapper[4688]: I1125 13:25:09.566279 4688 generic.go:334] "Generic (PLEG): container finished" podID="f7d7f40b-63c6-4b6e-baef-292b0d5d3118" containerID="9593fe4934f25ad7111a9e70418041fba63e84d5cff864c0ae1a99b7c989b5e3" exitCode=1 Nov 25 13:25:09 crc kubenswrapper[4688]: I1125 13:25:09.566323 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/crc-debug-948qk" event={"ID":"f7d7f40b-63c6-4b6e-baef-292b0d5d3118","Type":"ContainerDied","Data":"9593fe4934f25ad7111a9e70418041fba63e84d5cff864c0ae1a99b7c989b5e3"} Nov 25 13:25:09 crc kubenswrapper[4688]: I1125 13:25:09.609655 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7qmqw/crc-debug-948qk"] Nov 25 13:25:09 crc kubenswrapper[4688]: I1125 13:25:09.619458 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7qmqw/crc-debug-948qk"] Nov 25 13:25:10 crc kubenswrapper[4688]: I1125 13:25:10.673725 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:10 crc kubenswrapper[4688]: I1125 13:25:10.833122 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhpk4\" (UniqueName: \"kubernetes.io/projected/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-kube-api-access-bhpk4\") pod \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\" (UID: \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\") " Nov 25 13:25:10 crc kubenswrapper[4688]: I1125 13:25:10.833630 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-host\") pod \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\" (UID: \"f7d7f40b-63c6-4b6e-baef-292b0d5d3118\") " Nov 25 13:25:10 crc kubenswrapper[4688]: I1125 13:25:10.833735 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-host" (OuterVolumeSpecName: "host") pod "f7d7f40b-63c6-4b6e-baef-292b0d5d3118" (UID: "f7d7f40b-63c6-4b6e-baef-292b0d5d3118"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 13:25:10 crc kubenswrapper[4688]: I1125 13:25:10.834653 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-host\") on node \"crc\" DevicePath \"\"" Nov 25 13:25:10 crc kubenswrapper[4688]: I1125 13:25:10.838711 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-kube-api-access-bhpk4" (OuterVolumeSpecName: "kube-api-access-bhpk4") pod "f7d7f40b-63c6-4b6e-baef-292b0d5d3118" (UID: "f7d7f40b-63c6-4b6e-baef-292b0d5d3118"). InnerVolumeSpecName "kube-api-access-bhpk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:25:10 crc kubenswrapper[4688]: I1125 13:25:10.936964 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhpk4\" (UniqueName: \"kubernetes.io/projected/f7d7f40b-63c6-4b6e-baef-292b0d5d3118-kube-api-access-bhpk4\") on node \"crc\" DevicePath \"\"" Nov 25 13:25:11 crc kubenswrapper[4688]: I1125 13:25:11.585565 4688 scope.go:117] "RemoveContainer" containerID="9593fe4934f25ad7111a9e70418041fba63e84d5cff864c0ae1a99b7c989b5e3" Nov 25 13:25:11 crc kubenswrapper[4688]: I1125 13:25:11.585952 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7qmqw/crc-debug-948qk" Nov 25 13:25:12 crc kubenswrapper[4688]: I1125 13:25:12.751293 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d7f40b-63c6-4b6e-baef-292b0d5d3118" path="/var/lib/kubelet/pods/f7d7f40b-63c6-4b6e-baef-292b0d5d3118/volumes" Nov 25 13:25:17 crc kubenswrapper[4688]: I1125 13:25:17.854449 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:25:17 crc kubenswrapper[4688]: I1125 13:25:17.856147 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:25:47 crc kubenswrapper[4688]: I1125 13:25:47.853855 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:25:47 crc kubenswrapper[4688]: I1125 13:25:47.854343 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.630166 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-drt2d"] Nov 25 13:25:52 crc kubenswrapper[4688]: E1125 13:25:52.631460 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d7f40b-63c6-4b6e-baef-292b0d5d3118" containerName="container-00" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.631480 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d7f40b-63c6-4b6e-baef-292b0d5d3118" containerName="container-00" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.632289 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d7f40b-63c6-4b6e-baef-292b0d5d3118" containerName="container-00" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.634180 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.643900 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drt2d"] Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.648296 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527rn\" (UniqueName: \"kubernetes.io/projected/7e3d9fc0-9083-4ef8-9263-3e23571ba294-kube-api-access-527rn\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.648453 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-utilities\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.648520 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-catalog-content\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.750459 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-527rn\" (UniqueName: \"kubernetes.io/projected/7e3d9fc0-9083-4ef8-9263-3e23571ba294-kube-api-access-527rn\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.750828 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-utilities\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.750868 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-catalog-content\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.751764 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-utilities\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.752260 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-catalog-content\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.777478 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-527rn\" (UniqueName: \"kubernetes.io/projected/7e3d9fc0-9083-4ef8-9263-3e23571ba294-kube-api-access-527rn\") pod \"certified-operators-drt2d\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:52 crc kubenswrapper[4688]: I1125 13:25:52.966875 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:25:54 crc kubenswrapper[4688]: I1125 13:25:54.103932 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drt2d"] Nov 25 13:25:55 crc kubenswrapper[4688]: I1125 13:25:55.037427 4688 generic.go:334] "Generic (PLEG): container finished" podID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerID="8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549" exitCode=0 Nov 25 13:25:55 crc kubenswrapper[4688]: I1125 13:25:55.037558 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drt2d" event={"ID":"7e3d9fc0-9083-4ef8-9263-3e23571ba294","Type":"ContainerDied","Data":"8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549"} Nov 25 13:25:55 crc kubenswrapper[4688]: I1125 13:25:55.037742 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drt2d" event={"ID":"7e3d9fc0-9083-4ef8-9263-3e23571ba294","Type":"ContainerStarted","Data":"92ac73ab4e16bc1b54a3b808d3f4a1f9b242cd9571cb8d95d6c53e68abbc799f"} Nov 25 13:25:56 crc kubenswrapper[4688]: I1125 13:25:56.050027 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drt2d" event={"ID":"7e3d9fc0-9083-4ef8-9263-3e23571ba294","Type":"ContainerStarted","Data":"78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406"} Nov 25 13:25:58 crc kubenswrapper[4688]: I1125 13:25:58.069626 4688 generic.go:334] "Generic (PLEG): container finished" podID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerID="78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406" exitCode=0 Nov 25 13:25:58 crc kubenswrapper[4688]: I1125 13:25:58.069807 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drt2d" event={"ID":"7e3d9fc0-9083-4ef8-9263-3e23571ba294","Type":"ContainerDied","Data":"78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406"} Nov 25 13:25:59 crc kubenswrapper[4688]: I1125 13:25:59.081490 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drt2d" event={"ID":"7e3d9fc0-9083-4ef8-9263-3e23571ba294","Type":"ContainerStarted","Data":"96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b"} Nov 25 13:25:59 crc kubenswrapper[4688]: I1125 13:25:59.108163 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drt2d" podStartSLOduration=3.677926092 podStartE2EDuration="7.108140217s" podCreationTimestamp="2025-11-25 13:25:52 +0000 UTC" firstStartedPulling="2025-11-25 13:25:55.03913624 +0000 UTC m=+4305.148765108" lastFinishedPulling="2025-11-25 13:25:58.469350355 +0000 UTC m=+4308.578979233" observedRunningTime="2025-11-25 13:25:59.098384173 +0000 UTC m=+4309.208013041" watchObservedRunningTime="2025-11-25 13:25:59.108140217 +0000 UTC m=+4309.217769085" Nov 25 13:26:02 crc kubenswrapper[4688]: I1125 13:26:02.967320 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:26:02 crc kubenswrapper[4688]: I1125 13:26:02.967984 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:26:03 crc kubenswrapper[4688]: I1125 13:26:03.026603 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:26:03 crc kubenswrapper[4688]: I1125 13:26:03.188741 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:26:03 crc kubenswrapper[4688]: I1125 13:26:03.275111 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drt2d"] Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.136090 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drt2d" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerName="registry-server" containerID="cri-o://96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b" gracePeriod=2 Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.644336 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.833079 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-527rn\" (UniqueName: \"kubernetes.io/projected/7e3d9fc0-9083-4ef8-9263-3e23571ba294-kube-api-access-527rn\") pod \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.833357 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-utilities\") pod \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.833403 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-catalog-content\") pod \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\" (UID: \"7e3d9fc0-9083-4ef8-9263-3e23571ba294\") " Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.834512 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-utilities" (OuterVolumeSpecName: "utilities") pod "7e3d9fc0-9083-4ef8-9263-3e23571ba294" (UID: "7e3d9fc0-9083-4ef8-9263-3e23571ba294"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.893259 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e3d9fc0-9083-4ef8-9263-3e23571ba294" (UID: "7e3d9fc0-9083-4ef8-9263-3e23571ba294"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.936054 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:26:05 crc kubenswrapper[4688]: I1125 13:26:05.936367 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e3d9fc0-9083-4ef8-9263-3e23571ba294-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.148902 4688 generic.go:334] "Generic (PLEG): container finished" podID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerID="96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b" exitCode=0 Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.148962 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drt2d" event={"ID":"7e3d9fc0-9083-4ef8-9263-3e23571ba294","Type":"ContainerDied","Data":"96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b"} Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.149001 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drt2d" event={"ID":"7e3d9fc0-9083-4ef8-9263-3e23571ba294","Type":"ContainerDied","Data":"92ac73ab4e16bc1b54a3b808d3f4a1f9b242cd9571cb8d95d6c53e68abbc799f"} Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.149024 4688 scope.go:117] "RemoveContainer" containerID="96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.148969 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drt2d" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.169772 4688 scope.go:117] "RemoveContainer" containerID="78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.252067 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3d9fc0-9083-4ef8-9263-3e23571ba294-kube-api-access-527rn" (OuterVolumeSpecName: "kube-api-access-527rn") pod "7e3d9fc0-9083-4ef8-9263-3e23571ba294" (UID: "7e3d9fc0-9083-4ef8-9263-3e23571ba294"). InnerVolumeSpecName "kube-api-access-527rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.263244 4688 scope.go:117] "RemoveContainer" containerID="8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.344059 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-527rn\" (UniqueName: \"kubernetes.io/projected/7e3d9fc0-9083-4ef8-9263-3e23571ba294-kube-api-access-527rn\") on node \"crc\" DevicePath \"\"" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.354299 4688 scope.go:117] "RemoveContainer" containerID="96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b" Nov 25 13:26:06 crc kubenswrapper[4688]: E1125 13:26:06.354870 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b\": container with ID starting with 96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b not found: ID does not exist" containerID="96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.354910 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b"} err="failed to get container status \"96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b\": rpc error: code = NotFound desc = could not find container \"96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b\": container with ID starting with 96359a6a5b295f77520165d65d3458560f941485a13f1233cd5f386f2e95869b not found: ID does not exist" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.354943 4688 scope.go:117] "RemoveContainer" containerID="78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406" Nov 25 13:26:06 crc kubenswrapper[4688]: E1125 13:26:06.355246 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406\": container with ID starting with 78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406 not found: ID does not exist" containerID="78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.355276 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406"} err="failed to get container status \"78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406\": rpc error: code = NotFound desc = could not find container \"78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406\": container with ID starting with 78a5c7beff9fab7b4e7bc461bc3491552817dace58e36553c5bb708d2a281406 not found: ID does not exist" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.355293 4688 scope.go:117] "RemoveContainer" containerID="8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549" Nov 25 13:26:06 crc kubenswrapper[4688]: E1125 13:26:06.355713 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549\": container with ID starting with 8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549 not found: ID does not 
exist" containerID="8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.355762 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549"} err="failed to get container status \"8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549\": rpc error: code = NotFound desc = could not find container \"8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549\": container with ID starting with 8e1da2cc3901cdb68d4b69ba1b0cb251d88444bb6854e6ef578950806e214549 not found: ID does not exist" Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.481639 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drt2d"] Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.499055 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drt2d"] Nov 25 13:26:06 crc kubenswrapper[4688]: I1125 13:26:06.762033 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" path="/var/lib/kubelet/pods/7e3d9fc0-9083-4ef8-9263-3e23571ba294/volumes" Nov 25 13:26:10 crc kubenswrapper[4688]: I1125 13:26:10.834588 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_585e8555-4253-49c0-a482-9aefd967e4d2/init-config-reloader/0.log" Nov 25 13:26:10 crc kubenswrapper[4688]: I1125 13:26:10.995388 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_585e8555-4253-49c0-a482-9aefd967e4d2/init-config-reloader/0.log" Nov 25 13:26:10 crc kubenswrapper[4688]: I1125 13:26:10.997961 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_585e8555-4253-49c0-a482-9aefd967e4d2/alertmanager/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.056156 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_585e8555-4253-49c0-a482-9aefd967e4d2/config-reloader/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.198457 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a0483ad2-006e-4eb4-aa60-9fe5c8eed056/aodh-evaluator/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.202894 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a0483ad2-006e-4eb4-aa60-9fe5c8eed056/aodh-api/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.235412 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a0483ad2-006e-4eb4-aa60-9fe5c8eed056/aodh-listener/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.278396 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_a0483ad2-006e-4eb4-aa60-9fe5c8eed056/aodh-notifier/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.377021 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c8499f65b-2b8f7_0a7c7991-8c0b-481a-81f1-62119d1d47e5/barbican-api/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.413104 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c8499f65b-2b8f7_0a7c7991-8c0b-481a-81f1-62119d1d47e5/barbican-api-log/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.565167 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-6b7fdfcd-rxxwx_414e5c21-70ba-42cc-b382-558d0c95a1ea/barbican-keystone-listener/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.614965 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b7fdfcd-rxxwx_414e5c21-70ba-42cc-b382-558d0c95a1ea/barbican-keystone-listener-log/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.739078 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-75586f5599-pcl94_edbc7ba6-1fa5-418d-a639-3b88eee1c4fb/barbican-worker/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.742950 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-75586f5599-pcl94_edbc7ba6-1fa5-418d-a639-3b88eee1c4fb/barbican-worker-log/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.954214 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_519ebfe6-d6c7-4205-b0f6-23be8445474f/ceilometer-central-agent/0.log" Nov 25 13:26:11 crc kubenswrapper[4688]: I1125 13:26:11.954219 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-7pqs2_9b744290-1dac-4fcf-99d7-6a4a7b2287f6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.077846 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_519ebfe6-d6c7-4205-b0f6-23be8445474f/ceilometer-notification-agent/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.117788 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_519ebfe6-d6c7-4205-b0f6-23be8445474f/proxy-httpd/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.213758 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_519ebfe6-d6c7-4205-b0f6-23be8445474f/sg-core/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.300152 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6a92f2f8-f7d9-42da-8f61-d595c6e2e10b/cinder-api-log/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.371484 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6a92f2f8-f7d9-42da-8f61-d595c6e2e10b/cinder-api/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.487899 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_78027e7c-30ce-4ec6-b928-f9b1836c3568/cinder-scheduler/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.539629 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_78027e7c-30ce-4ec6-b928-f9b1836c3568/probe/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.743903 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-b8pnz_824692a9-2ed3-41c1-a34d-52ae721df261/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.762367 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mrs2m_0decfda5-2230-4d90-bc7c-f641bacb6117/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:12 crc kubenswrapper[4688]: I1125 13:26:12.918505 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ctv5w_26555a3c-6063-42b0-a1ce-18bebfe41afb/init/0.log" Nov 25 13:26:13 crc kubenswrapper[4688]: I1125 13:26:13.141323 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ctv5w_26555a3c-6063-42b0-a1ce-18bebfe41afb/dnsmasq-dns/0.log" Nov 25 13:26:13 crc kubenswrapper[4688]: I1125 13:26:13.174803 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wvvmd_6394a29c-847b-438c-826a-03443a7bb430/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:13 crc kubenswrapper[4688]: I1125 13:26:13.230812 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-ctv5w_26555a3c-6063-42b0-a1ce-18bebfe41afb/init/0.log" Nov 25 13:26:13 crc kubenswrapper[4688]: I1125 13:26:13.372738 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c/glance-httpd/0.log" Nov 25 13:26:13 crc kubenswrapper[4688]: I1125 13:26:13.437307 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fd7cfe17-e95d-4a99-aa5b-4e5c30952f6c/glance-log/0.log" Nov 25 13:26:13 crc kubenswrapper[4688]: I1125 13:26:13.577532 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_274d19d3-bdcd-44c9-b44e-48f97d1dc4f5/glance-httpd/0.log" Nov 25 13:26:13 crc kubenswrapper[4688]: I1125 13:26:13.595648 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_274d19d3-bdcd-44c9-b44e-48f97d1dc4f5/glance-log/0.log" Nov 25 13:26:14 crc kubenswrapper[4688]: I1125 13:26:14.189023 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-ff9c99746-zhh6h_6ed213d5-b2be-4cf1-8416-1ec71b9bb32c/heat-engine/0.log" Nov 25 13:26:14 crc kubenswrapper[4688]: I1125 13:26:14.410037 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7f444b957c-4fqdt_1efc3d47-fd73-4e3c-9357-5fd608383972/heat-cfnapi/0.log" Nov 25 13:26:14 crc kubenswrapper[4688]: I1125 13:26:14.420220 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7d45cc658d-s47zc_ab8d1502-0fe9-44cb-af7e-8466e27f75d4/heat-api/0.log" Nov 25 13:26:14 crc kubenswrapper[4688]: I1125 13:26:14.420975 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fxtc7_f5155f9f-2994-43db-9adc-665613ab1711/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:14 crc kubenswrapper[4688]: I1125 13:26:14.604549 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8ljvl_a432e01c-b2a4-453c-b40b-d8fadf5a1b3b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:14 crc kubenswrapper[4688]: I1125 13:26:14.678540 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401261-jmm7d_a3a1b93f-b907-499e-b150-f2627f93b4b2/keystone-cron/0.log" Nov 25 13:26:14 crc kubenswrapper[4688]: I1125 13:26:14.706188 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-544b4d8674-8x8rj_aaf46f92-56fe-402e-831c-7641bd8dc3d2/keystone-api/0.log" Nov 25 13:26:15 crc kubenswrapper[4688]: I1125 13:26:15.264215 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_95c2802c-7143-4d63-8959-434c04453333/kube-state-metrics/2.log" Nov 25 13:26:15 crc kubenswrapper[4688]: I1125 13:26:15.271549 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_95c2802c-7143-4d63-8959-434c04453333/kube-state-metrics/3.log" Nov 25 13:26:15 crc kubenswrapper[4688]: I1125 13:26:15.281031 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-2lxr9_e884e12f-21a9-42e8-815e-78c0108842d8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:15 crc kubenswrapper[4688]: I1125 13:26:15.496815 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f854495df-t6szb_afe7afb2-5157-4e1b-964f-c402acb02765/neutron-api/0.log" Nov 25 13:26:15 crc kubenswrapper[4688]: I1125 13:26:15.556549 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f854495df-t6szb_afe7afb2-5157-4e1b-964f-c402acb02765/neutron-httpd/0.log" Nov 25 13:26:15 crc kubenswrapper[4688]: I1125 13:26:15.741400 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-rvbt6_7a8f5458-7f30-4fd1-963f-b3619c7f506f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:16 crc kubenswrapper[4688]: I1125 13:26:16.154384 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a46e21bc-c734-4c5d-a16a-27860cb65ab0/nova-api-log/0.log" Nov 25 13:26:16 crc kubenswrapper[4688]: I1125 13:26:16.284236 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7dd18c71-cfd7-4552-ab55-c0f00f1a5c46/nova-cell0-conductor-conductor/0.log" Nov 25 13:26:16 crc kubenswrapper[4688]: I1125 13:26:16.408440 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a46e21bc-c734-4c5d-a16a-27860cb65ab0/nova-api-api/0.log" Nov 25 13:26:16 crc kubenswrapper[4688]: I1125 13:26:16.524040 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_92f1ea60-7d39-4b4f-911e-7dfdffffe38b/nova-cell1-conductor-conductor/0.log" Nov 25 13:26:16 crc kubenswrapper[4688]: I1125 13:26:16.653618 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_db662a26-0b85-4b43-9dcd-8b21fd64c3e9/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.177871 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-z88rx_79315b1a-e9e2-422c-8be3-97ebdb2038c0/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.222233 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ac5d7790-506b-40b0-9721-6cff85ff053e/nova-metadata-log/0.log" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.595046 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_eca13d90-02a1-44cc-88ed-dabdce12144a/nova-scheduler-scheduler/0.log" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.712860 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_666dbc1a-fbdf-4ff1-b949-926ea3e70472/mysql-bootstrap/0.log" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.844931 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_666dbc1a-fbdf-4ff1-b949-926ea3e70472/galera/0.log" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.853350 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.853398 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.853442 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.853966 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77d88e46a36225e058b9406bf0627cacbc3ffa65e3efd7cc35fdce3c5d13c2b6"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.854014 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://77d88e46a36225e058b9406bf0627cacbc3ffa65e3efd7cc35fdce3c5d13c2b6" gracePeriod=600 Nov 25 13:26:17 crc kubenswrapper[4688]: I1125 13:26:17.865099 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_666dbc1a-fbdf-4ff1-b949-926ea3e70472/mysql-bootstrap/0.log" Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.045596 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3c9c32d1-459d-4c35-8cf3-876542a657e9/mysql-bootstrap/0.log" Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.266568 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="77d88e46a36225e058b9406bf0627cacbc3ffa65e3efd7cc35fdce3c5d13c2b6" exitCode=0 Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.266631 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"77d88e46a36225e058b9406bf0627cacbc3ffa65e3efd7cc35fdce3c5d13c2b6"} Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.266720 4688 scope.go:117] "RemoveContainer" containerID="f553fe058d4e336c4aff91b3d486bfec4dfe9c7a978a189b67c60cab90d55f81" Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.325123 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3c9c32d1-459d-4c35-8cf3-876542a657e9/mysql-bootstrap/0.log" Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.345267 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3c9c32d1-459d-4c35-8cf3-876542a657e9/galera/0.log" Nov 25 13:26:18 crc 
kubenswrapper[4688]: I1125 13:26:18.506318 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3ecf7482-aefd-4e71-a856-9818296c91e7/openstackclient/0.log"
Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.635863 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-92hwc_4ac126ff-ac63-4d6a-b201-e6dbd8ba3153/ovn-controller/0.log"
Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.812805 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sfg7j_cfcc3ad5-018f-4723-bd38-1384baf3d72e/openstack-network-exporter/0.log"
Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.871157 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ac5d7790-506b-40b0-9721-6cff85ff053e/nova-metadata-metadata/0.log"
Nov 25 13:26:18 crc kubenswrapper[4688]: I1125 13:26:18.972039 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nndbf_d6f000f3-dc04-44a4-b019-d41633753240/ovsdb-server-init/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.192310 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nndbf_d6f000f3-dc04-44a4-b019-d41633753240/ovsdb-server-init/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.251166 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nndbf_d6f000f3-dc04-44a4-b019-d41633753240/ovsdb-server/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.266239 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nndbf_d6f000f3-dc04-44a4-b019-d41633753240/ovs-vswitchd/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.278885 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerStarted","Data":"d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4"}
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.475740 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0dd00154-420b-4be7-84de-ea971d680ff3/openstack-network-exporter/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.501720 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dgk5q_ec1f886e-aff6-4077-a770-5bf03fe54bc9/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.576303 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0dd00154-420b-4be7-84de-ea971d680ff3/ovn-northd/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.716058 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_33b0d963-d13d-4b40-b458-b85ec4f10131/openstack-network-exporter/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.804573 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_33b0d963-d13d-4b40-b458-b85ec4f10131/ovsdbserver-nb/0.log"
Nov 25 13:26:19 crc kubenswrapper[4688]: I1125 13:26:19.999066 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3644aadc-3c20-41f5-8969-f84b941eef27/openstack-network-exporter/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.035219 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3644aadc-3c20-41f5-8969-f84b941eef27/ovsdbserver-sb/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.071623 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85ddfb974d-m4b6g_e27f5e13-7857-4d61-bbe1-cb74fb57f7d4/placement-api/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.257847 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/init-config-reloader/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.270456 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85ddfb974d-m4b6g_e27f5e13-7857-4d61-bbe1-cb74fb57f7d4/placement-log/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.499016 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/init-config-reloader/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.516969 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/config-reloader/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.519200 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/prometheus/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.534413 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_684150f1-16d8-4c3b-87c0-b1db8df1a115/thanos-sidecar/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.714161 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_31cb28aa-9d13-4a28-b87d-85abb3af9cef/setup-container/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.908021 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_31cb28aa-9d13-4a28-b87d-85abb3af9cef/rabbitmq/0.log"
Nov 25 13:26:20 crc kubenswrapper[4688]: I1125 13:26:20.964177 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_31cb28aa-9d13-4a28-b87d-85abb3af9cef/setup-container/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.027499 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_24997c07-a110-43df-accd-9daeeff9a29c/setup-container/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.177228 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_24997c07-a110-43df-accd-9daeeff9a29c/setup-container/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.217149 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_24997c07-a110-43df-accd-9daeeff9a29c/rabbitmq/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.250022 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mgd2k_9463536c-fd6c-4aee-b3a9-c7f20996f5c7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.450266 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8b8pz_567fcefc-5ba1-449d-959c-3209a8d586a9/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.525559 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tqx6d_e2224f64-766c-4746-b65f-8e235c609a74/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.675928 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-x8w6p_09188b82-0612-4538-b6ca-7517d7da935b/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.785758 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-zk4hf_c3ed5d98-1ee9-4de0-9387-cb3082a348bd/ssh-known-hosts-edpm-deployment/0.log"
Nov 25 13:26:21 crc kubenswrapper[4688]: I1125 13:26:21.998883 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5dcbb6d5d7-bpx7g_2c3f8ead-c9ee-4ce5-923a-558a17e1f688/proxy-server/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.043587 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5dcbb6d5d7-bpx7g_2c3f8ead-c9ee-4ce5-923a-558a17e1f688/proxy-httpd/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.123157 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-brb2n_a917ea03-c867-4449-a317-2ed904672efa/swift-ring-rebalance/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.284654 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/account-auditor/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.351637 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/account-reaper/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.440800 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/account-replicator/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.479402 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/account-server/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.561653 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/container-auditor/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.568324 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/container-replicator/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.636234 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/container-server/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.671653 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/container-updater/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.850438 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-auditor/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.862847 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-expirer/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.915395 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-server/0.log"
Nov 25 13:26:22 crc kubenswrapper[4688]: I1125 13:26:22.926493 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-replicator/0.log"
Nov 25 13:26:23 crc kubenswrapper[4688]: I1125 13:26:23.032098 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/object-updater/0.log"
Nov 25 13:26:23 crc kubenswrapper[4688]: I1125 13:26:23.105881 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/rsync/0.log"
Nov 25 13:26:23 crc kubenswrapper[4688]: I1125 13:26:23.144036 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_000479f0-0b04-4867-989b-622c2e951f4b/swift-recon-cron/0.log"
Nov 25 13:26:23 crc kubenswrapper[4688]: I1125 13:26:23.315366 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-65sz7_34495de9-ab63-49d9-b01f-a07ec58b7a3f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 13:26:23 crc kubenswrapper[4688]: I1125 13:26:23.415058 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-hjvqr_1f744c38-f708-44f5-952a-419118bcade4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 13:26:33 crc kubenswrapper[4688]: I1125 13:26:33.806430 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_412ee2a8-6c40-4142-8e09-05f4c22862c0/memcached/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.212041 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/util/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.424197 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/util/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.438043 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.461580 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.625101 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/extract/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.632432 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/util/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.685034 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4c2d5a3c9e6b489cc280c8ecc6a97197930056790e5820aef2a21902817zgzl_d23f2ee5-379f-4df5-9650-915df314ec2a/pull/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.855744 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-q4ffj_6efe1c76-76a3-4c72-bb71-0963553bbb98/manager/1.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.878261 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-q4ffj_6efe1c76-76a3-4c72-bb71-0963553bbb98/kube-rbac-proxy/0.log"
Nov 25 13:26:55 crc kubenswrapper[4688]: I1125 13:26:55.885389 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-q4ffj_6efe1c76-76a3-4c72-bb71-0963553bbb98/manager/2.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.028809 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-ptqrp_87bbdcd1-48cf-4310-9131-93dadc55a0f1/kube-rbac-proxy/0.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.114662 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-ptqrp_87bbdcd1-48cf-4310-9131-93dadc55a0f1/manager/2.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.125068 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-ptqrp_87bbdcd1-48cf-4310-9131-93dadc55a0f1/manager/1.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.246313 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-vkj6d_acc9de1c-caf4-40f2-8e3c-470f1059599a/kube-rbac-proxy/0.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.299803 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-vkj6d_acc9de1c-caf4-40f2-8e3c-470f1059599a/manager/1.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.303091 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-vkj6d_acc9de1c-caf4-40f2-8e3c-470f1059599a/manager/2.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.855462 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-2snng_5c7a1a6d-a3f3-4490-a6ba-f521535a1364/kube-rbac-proxy/0.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.870449 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-2snng_5c7a1a6d-a3f3-4490-a6ba-f521535a1364/manager/1.log"
Nov 25 13:26:56 crc kubenswrapper[4688]: I1125 13:26:56.899997 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-2snng_5c7a1a6d-a3f3-4490-a6ba-f521535a1364/manager/2.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.093941 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-b9jdn_92794534-2689-4fde-8597-4cc766d7b3b0/kube-rbac-proxy/0.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.096675 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-b9jdn_92794534-2689-4fde-8597-4cc766d7b3b0/manager/2.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.111692 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-b9jdn_92794534-2689-4fde-8597-4cc766d7b3b0/manager/1.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.231668 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fpp9v"]
Nov 25 13:26:57 crc kubenswrapper[4688]: E1125 13:26:57.232211 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerName="extract-content"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.232226 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerName="extract-content"
Nov 25 13:26:57 crc kubenswrapper[4688]: E1125 13:26:57.232246 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerName="registry-server"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.232254 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerName="registry-server"
Nov 25 13:26:57 crc kubenswrapper[4688]: E1125 13:26:57.232270 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerName="extract-utilities"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.232279 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerName="extract-utilities"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.232607 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3d9fc0-9083-4ef8-9263-3e23571ba294" containerName="registry-server"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.234360 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.250841 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpp9v"]
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.255256 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9crt\" (UniqueName: \"kubernetes.io/projected/2d3aba14-37b8-494d-bb47-34a746a41d3e-kube-api-access-q9crt\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.255335 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-utilities\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.255364 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-catalog-content\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.310288 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zfvn2_55967ae9-2dad-4d45-a8c3-bdaa483f9ea7/manager/2.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.357852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-utilities\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.357937 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-catalog-content\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.358188 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9crt\" (UniqueName: \"kubernetes.io/projected/2d3aba14-37b8-494d-bb47-34a746a41d3e-kube-api-access-q9crt\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.358623 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-utilities\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.358801 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-catalog-content\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.368694 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zfvn2_55967ae9-2dad-4d45-a8c3-bdaa483f9ea7/kube-rbac-proxy/0.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.377363 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-zfvn2_55967ae9-2dad-4d45-a8c3-bdaa483f9ea7/manager/1.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.380676 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9crt\" (UniqueName: \"kubernetes.io/projected/2d3aba14-37b8-494d-bb47-34a746a41d3e-kube-api-access-q9crt\") pod \"redhat-marketplace-fpp9v\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") " pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.562407 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.613256 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-q2tdz_0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d/manager/2.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.662505 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-q2tdz_0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d/kube-rbac-proxy/0.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.669940 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-q2tdz_0d5b85ee-6a84-41bb-bc71-1d6ba38ed13d/manager/1.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.953613 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-tn6tq_78451e33-7e86-4635-ac5f-d2c6a9ae6e71/manager/1.log"
Nov 25 13:26:57 crc kubenswrapper[4688]: I1125 13:26:57.964141 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-tn6tq_78451e33-7e86-4635-ac5f-d2c6a9ae6e71/kube-rbac-proxy/0.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.120396 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-tn6tq_78451e33-7e86-4635-ac5f-d2c6a9ae6e71/manager/2.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.142188 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpp9v"]
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.231079 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-9qfpp_592ea8b1-efc4-4027-a7dc-3943125fd935/kube-rbac-proxy/0.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.236000 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-9qfpp_592ea8b1-efc4-4027-a7dc-3943125fd935/manager/2.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.360334 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-9qfpp_592ea8b1-efc4-4027-a7dc-3943125fd935/manager/1.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.612721 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-vcnvc_94f12846-9cbe-4997-9160-3545778ecfde/kube-rbac-proxy/0.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.645761 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-vcnvc_94f12846-9cbe-4997-9160-3545778ecfde/manager/2.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.653713 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-vcnvc_94f12846-9cbe-4997-9160-3545778ecfde/manager/1.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.655893 4688 generic.go:334] "Generic (PLEG): container finished" podID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerID="4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c" exitCode=0
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.655937 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpp9v" event={"ID":"2d3aba14-37b8-494d-bb47-34a746a41d3e","Type":"ContainerDied","Data":"4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c"}
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.655960 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpp9v" event={"ID":"2d3aba14-37b8-494d-bb47-34a746a41d3e","Type":"ContainerStarted","Data":"be1aebc287f8657d93b5aa747fa05d2b40a6f515c91828225c95afadbc64e60c"}
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.838249 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-94snn_7cd9dc7e-be06-416a-aebe-c0b160c79697/manager/2.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.842045 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-94snn_7cd9dc7e-be06-416a-aebe-c0b160c79697/kube-rbac-proxy/0.log"
Nov 25 13:26:58 crc kubenswrapper[4688]: I1125 13:26:58.920347 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-94snn_7cd9dc7e-be06-416a-aebe-c0b160c79697/manager/1.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.015915 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-ltlms_808a5b9f-95a2-4f58-abe2-30758a6a7e2a/kube-rbac-proxy/0.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.049292 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-ltlms_808a5b9f-95a2-4f58-abe2-30758a6a7e2a/manager/1.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.056216 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-ltlms_808a5b9f-95a2-4f58-abe2-30758a6a7e2a/manager/2.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.148852 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-kvt5r_e2f91df4-3b39-4c05-9fee-dd3f7622fd13/kube-rbac-proxy/0.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.248157 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-kvt5r_e2f91df4-3b39-4c05-9fee-dd3f7622fd13/manager/2.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.290799 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-kvt5r_e2f91df4-3b39-4c05-9fee-dd3f7622fd13/manager/1.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.388161 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4zlm5_6efa691a-9f05-4d6a-8517-cba5b00426cd/manager/2.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.456414 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4zlm5_6efa691a-9f05-4d6a-8517-cba5b00426cd/kube-rbac-proxy/0.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.525865 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-4zlm5_6efa691a-9f05-4d6a-8517-cba5b00426cd/manager/1.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.583027 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh_7f63e16e-9d9b-4e1a-b497-1417e8e7b79e/kube-rbac-proxy/0.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.663049 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh_7f63e16e-9d9b-4e1a-b497-1417e8e7b79e/manager/1.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.664635 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lfgkh_7f63e16e-9d9b-4e1a-b497-1417e8e7b79e/manager/0.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.667210 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpp9v" event={"ID":"2d3aba14-37b8-494d-bb47-34a746a41d3e","Type":"ContainerStarted","Data":"b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6"}
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.857404 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_1364865a-3285-428d-b672-064400c43c94/manager/2.log"
Nov 25 13:26:59 crc kubenswrapper[4688]: I1125 13:26:59.930254 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-9644ff45d-57xk4_34cf6884-a630-417d-81ff-08c5ff19be31/operator/1.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.069829 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-9644ff45d-57xk4_34cf6884-a630-417d-81ff-08c5ff19be31/operator/0.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.179633 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-47kgw_eef7b939-1770-40d0-8ba8-9458f9160a52/registry-server/0.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.319808 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-gzslz_fa49233e-de1b-4bea-85a6-de285e0e60f6/kube-rbac-proxy/0.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.321039 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bdd9b6cb6-vgfmk_1364865a-3285-428d-b672-064400c43c94/manager/3.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.385352 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-gzslz_fa49233e-de1b-4bea-85a6-de285e0e60f6/manager/2.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.398365 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-gzslz_fa49233e-de1b-4bea-85a6-de285e0e60f6/manager/1.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.401922 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8q5q6"]
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.404114 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.418601 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-utilities\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.418663 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69c5c\" (UniqueName: \"kubernetes.io/projected/35438d40-7160-401e-a9e0-81c585bec3d3-kube-api-access-69c5c\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.418730 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-catalog-content\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.421405 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8q5q6"]
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.519832 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-utilities\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.519905 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69c5c\" (UniqueName: \"kubernetes.io/projected/35438d40-7160-401e-a9e0-81c585bec3d3-kube-api-access-69c5c\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.519960 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-catalog-content\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.520671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-catalog-content\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.520780 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-utilities\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.550416 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69c5c\" (UniqueName: \"kubernetes.io/projected/35438d40-7160-401e-a9e0-81c585bec3d3-kube-api-access-69c5c\") pod \"community-operators-8q5q6\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") " pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.646171 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-nxnmg_3649a66a-709f-4b77-b798-e5f90eeb2e5d/kube-rbac-proxy/0.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.726729 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-nxnmg_3649a66a-709f-4b77-b798-e5f90eeb2e5d/manager/2.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.788764 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-nxnmg_3649a66a-709f-4b77-b798-e5f90eeb2e5d/manager/3.log"
Nov 25 13:27:00 crc kubenswrapper[4688]: I1125 13:27:00.799727 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.017283 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wf6w6_93553656-ef25-4318-81f1-a4e7f973ed38/operator/2.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.084991 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wf6w6_93553656-ef25-4318-81f1-a4e7f973ed38/operator/3.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.215611 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-c76gt_d4c78fcc-139a-4485-8628-dc14422a4710/manager/3.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.254563 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-c76gt_d4c78fcc-139a-4485-8628-dc14422a4710/kube-rbac-proxy/0.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.358330 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-c76gt_d4c78fcc-139a-4485-8628-dc14422a4710/manager/2.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.379228 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/kube-rbac-proxy/0.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.402428 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8q5q6"]
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.551149 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/1.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.608716 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c877c965-jptwb_3f65195f-4002-4d44-a25c-3c2603ed14c6/manager/2.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.617245 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dcnc8_59ac66df-a38a-4193-a6ff-fd4e74b1b113/kube-rbac-proxy/0.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.651745 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dcnc8_59ac66df-a38a-4193-a6ff-fd4e74b1b113/manager/1.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.694684 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q5q6" event={"ID":"35438d40-7160-401e-a9e0-81c585bec3d3","Type":"ContainerStarted","Data":"7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3"}
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.695043 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q5q6" event={"ID":"35438d40-7160-401e-a9e0-81c585bec3d3","Type":"ContainerStarted","Data":"f793d12f0a7d518fba292d9f4367e79ecf21d87ff5edc13c736aefcf1cb373e2"}
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.698172 4688 generic.go:334] "Generic (PLEG): container finished" podID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerID="b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6" exitCode=0
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.698229 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpp9v" event={"ID":"2d3aba14-37b8-494d-bb47-34a746a41d3e","Type":"ContainerDied","Data":"b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6"}
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.788772 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-dcnc8_59ac66df-a38a-4193-a6ff-fd4e74b1b113/manager/0.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.894511 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-gf8vv_ae188502-8c93-4a53-bb69-b9a964c82bc6/manager/1.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.899388 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-gf8vv_ae188502-8c93-4a53-bb69-b9a964c82bc6/kube-rbac-proxy/0.log"
Nov 25 13:27:01 crc kubenswrapper[4688]: I1125 13:27:01.901298 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-gf8vv_ae188502-8c93-4a53-bb69-b9a964c82bc6/manager/2.log"
Nov 25 13:27:02 crc kubenswrapper[4688]: I1125 13:27:02.719826 4688 generic.go:334] "Generic (PLEG): container finished" podID="35438d40-7160-401e-a9e0-81c585bec3d3" containerID="7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3" exitCode=0
Nov 25 13:27:02 crc kubenswrapper[4688]: I1125 13:27:02.719978 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q5q6" event={"ID":"35438d40-7160-401e-a9e0-81c585bec3d3","Type":"ContainerDied","Data":"7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3"}
Nov 25 13:27:02 crc kubenswrapper[4688]: I1125 13:27:02.728859 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpp9v" event={"ID":"2d3aba14-37b8-494d-bb47-34a746a41d3e","Type":"ContainerStarted","Data":"2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2"}
Nov 25 13:27:02 crc kubenswrapper[4688]: I1125 13:27:02.774901 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fpp9v" podStartSLOduration=2.249053508 podStartE2EDuration="5.774879206s" podCreationTimestamp="2025-11-25 13:26:57 +0000 UTC" firstStartedPulling="2025-11-25 13:26:58.657608175 +0000 UTC m=+4368.767237043" lastFinishedPulling="2025-11-25 13:27:02.183433873 +0000 UTC m=+4372.293062741" observedRunningTime="2025-11-25 13:27:02.767301381 +0000 UTC m=+4372.876930249" watchObservedRunningTime="2025-11-25 13:27:02.774879206 +0000 UTC m=+4372.884508074"
Nov 25 13:27:04 crc kubenswrapper[4688]: I1125 13:27:04.753805 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q5q6" event={"ID":"35438d40-7160-401e-a9e0-81c585bec3d3","Type":"ContainerStarted","Data":"a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735"}
Nov 25 13:27:05 crc kubenswrapper[4688]: I1125 13:27:05.764276 4688 generic.go:334] "Generic (PLEG): container finished" podID="35438d40-7160-401e-a9e0-81c585bec3d3" containerID="a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735" exitCode=0
Nov 25 13:27:05 crc kubenswrapper[4688]: I1125 13:27:05.764390 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q5q6" event={"ID":"35438d40-7160-401e-a9e0-81c585bec3d3","Type":"ContainerDied","Data":"a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735"}
Nov 25 13:27:06 crc kubenswrapper[4688]: I1125 13:27:06.775982 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q5q6" event={"ID":"35438d40-7160-401e-a9e0-81c585bec3d3","Type":"ContainerStarted","Data":"1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa"}
Nov 25 13:27:06 crc kubenswrapper[4688]: I1125 13:27:06.800186 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8q5q6" podStartSLOduration=3.029653411 podStartE2EDuration="6.800166392s" podCreationTimestamp="2025-11-25 13:27:00 +0000 UTC" firstStartedPulling="2025-11-25 13:27:02.721954956 +0000 UTC m=+4372.831583824" lastFinishedPulling="2025-11-25 13:27:06.492467937 +0000 UTC m=+4376.602096805" observedRunningTime="2025-11-25 13:27:06.793385609 +0000 UTC m=+4376.903014477" watchObservedRunningTime="2025-11-25 13:27:06.800166392 +0000 UTC m=+4376.909795260"
Nov 25 13:27:07 crc kubenswrapper[4688]: I1125 13:27:07.563980 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:27:07 crc kubenswrapper[4688]: I1125 13:27:07.564335 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:27:07 crc kubenswrapper[4688]: I1125 13:27:07.615952 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:27:07 crc kubenswrapper[4688]: I1125 13:27:07.853408 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:27:09 crc kubenswrapper[4688]: I1125 13:27:09.386706 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpp9v"]
Nov 25 13:27:09 crc kubenswrapper[4688]: I1125 13:27:09.802056 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fpp9v" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerName="registry-server" containerID="cri-o://2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2" gracePeriod=2
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.460650 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.549334 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-catalog-content\") pod \"2d3aba14-37b8-494d-bb47-34a746a41d3e\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") "
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.560337 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9crt\" (UniqueName: \"kubernetes.io/projected/2d3aba14-37b8-494d-bb47-34a746a41d3e-kube-api-access-q9crt\") pod \"2d3aba14-37b8-494d-bb47-34a746a41d3e\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") "
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.561166 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-utilities\") pod \"2d3aba14-37b8-494d-bb47-34a746a41d3e\" (UID: \"2d3aba14-37b8-494d-bb47-34a746a41d3e\") "
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.562013 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-utilities" (OuterVolumeSpecName: "utilities") pod "2d3aba14-37b8-494d-bb47-34a746a41d3e" (UID: "2d3aba14-37b8-494d-bb47-34a746a41d3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.564145 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.570159 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d3aba14-37b8-494d-bb47-34a746a41d3e-kube-api-access-q9crt" (OuterVolumeSpecName: "kube-api-access-q9crt") pod "2d3aba14-37b8-494d-bb47-34a746a41d3e" (UID: "2d3aba14-37b8-494d-bb47-34a746a41d3e"). InnerVolumeSpecName "kube-api-access-q9crt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.573377 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d3aba14-37b8-494d-bb47-34a746a41d3e" (UID: "2d3aba14-37b8-494d-bb47-34a746a41d3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.666355 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d3aba14-37b8-494d-bb47-34a746a41d3e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.666387 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9crt\" (UniqueName: \"kubernetes.io/projected/2d3aba14-37b8-494d-bb47-34a746a41d3e-kube-api-access-q9crt\") on node \"crc\" DevicePath \"\""
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.801121 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.801187 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.813928 4688 generic.go:334] "Generic (PLEG): container finished" podID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerID="2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2" exitCode=0
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.813969 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpp9v" event={"ID":"2d3aba14-37b8-494d-bb47-34a746a41d3e","Type":"ContainerDied","Data":"2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2"}
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.813994 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpp9v" event={"ID":"2d3aba14-37b8-494d-bb47-34a746a41d3e","Type":"ContainerDied","Data":"be1aebc287f8657d93b5aa747fa05d2b40a6f515c91828225c95afadbc64e60c"}
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.814012 4688 scope.go:117] "RemoveContainer" containerID="2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.814140 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpp9v"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.842860 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpp9v"]
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.851239 4688 scope.go:117] "RemoveContainer" containerID="b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.853692 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpp9v"]
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.872239 4688 scope.go:117] "RemoveContainer" containerID="4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.873362 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.927908 4688 scope.go:117] "RemoveContainer" containerID="2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2"
Nov 25 13:27:10 crc kubenswrapper[4688]: E1125 13:27:10.928393 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2\": container with ID starting with 2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2 not found: ID does not exist" containerID="2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.928441 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2"} err="failed to get container status \"2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2\": rpc error: code = NotFound desc = could not find container \"2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2\": container with ID starting with 2c196944c63777665386b038f540df8fcdf1ec0a8277015f189d54d8eb9e3ae2 not found: ID does not exist"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.928468 4688 scope.go:117] "RemoveContainer" containerID="b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6"
Nov 25 13:27:10 crc kubenswrapper[4688]: E1125 13:27:10.928868 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6\": container with ID starting with b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6 not found: ID does not exist" containerID="b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.928912 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6"} err="failed to get container status \"b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6\": rpc error: code = NotFound desc = could not find container \"b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6\": container with ID starting with b599eecb85e70ecd757d43e44483b671ba1cc6b4d24543de80dd34a14ccff6d6 not found: ID does not exist"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.928941 4688 scope.go:117] "RemoveContainer" containerID="4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c"
Nov 25 13:27:10 crc kubenswrapper[4688]: E1125 13:27:10.929251 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c\": container with ID starting with 4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c not found: ID does not exist" containerID="4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c"
Nov 25 13:27:10 crc kubenswrapper[4688]: I1125 13:27:10.929285 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c"} err="failed to get container status \"4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c\": rpc error: code = NotFound desc = could not find container \"4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c\": container with ID starting with 4791404813b8639987501a0811202db0423a091b70d74ad8b7ee12e9a695b45c not found: ID does not exist"
Nov 25 13:27:12 crc kubenswrapper[4688]: I1125 13:27:12.751561 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" path="/var/lib/kubelet/pods/2d3aba14-37b8-494d-bb47-34a746a41d3e/volumes"
Nov 25 13:27:20 crc kubenswrapper[4688]: I1125 13:27:20.857956 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:20 crc kubenswrapper[4688]: I1125 13:27:20.914574 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8q5q6"]
Nov 25 13:27:20 crc kubenswrapper[4688]: I1125 13:27:20.922119 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8q5q6" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" containerName="registry-server" containerID="cri-o://1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa" gracePeriod=2
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.453131 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.472589 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69c5c\" (UniqueName: \"kubernetes.io/projected/35438d40-7160-401e-a9e0-81c585bec3d3-kube-api-access-69c5c\") pod \"35438d40-7160-401e-a9e0-81c585bec3d3\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") "
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.472724 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-catalog-content\") pod \"35438d40-7160-401e-a9e0-81c585bec3d3\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") "
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.472873 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-utilities\") pod \"35438d40-7160-401e-a9e0-81c585bec3d3\" (UID: \"35438d40-7160-401e-a9e0-81c585bec3d3\") "
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.473734 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-utilities" (OuterVolumeSpecName: "utilities") pod "35438d40-7160-401e-a9e0-81c585bec3d3" (UID: "35438d40-7160-401e-a9e0-81c585bec3d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.478893 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35438d40-7160-401e-a9e0-81c585bec3d3-kube-api-access-69c5c" (OuterVolumeSpecName: "kube-api-access-69c5c") pod "35438d40-7160-401e-a9e0-81c585bec3d3" (UID: "35438d40-7160-401e-a9e0-81c585bec3d3"). InnerVolumeSpecName "kube-api-access-69c5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.535921 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35438d40-7160-401e-a9e0-81c585bec3d3" (UID: "35438d40-7160-401e-a9e0-81c585bec3d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.575432 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69c5c\" (UniqueName: \"kubernetes.io/projected/35438d40-7160-401e-a9e0-81c585bec3d3-kube-api-access-69c5c\") on node \"crc\" DevicePath \"\""
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.575730 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.575861 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35438d40-7160-401e-a9e0-81c585bec3d3-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.934164 4688 generic.go:334] "Generic (PLEG): container finished" podID="35438d40-7160-401e-a9e0-81c585bec3d3" containerID="1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa" exitCode=0
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.934226 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8q5q6"
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.935263 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q5q6" event={"ID":"35438d40-7160-401e-a9e0-81c585bec3d3","Type":"ContainerDied","Data":"1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa"}
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.935385 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q5q6" event={"ID":"35438d40-7160-401e-a9e0-81c585bec3d3","Type":"ContainerDied","Data":"f793d12f0a7d518fba292d9f4367e79ecf21d87ff5edc13c736aefcf1cb373e2"}
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.935412 4688 scope.go:117] "RemoveContainer" containerID="1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa"
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.959995 4688 scope.go:117] "RemoveContainer" containerID="a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735"
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.977140 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8q5q6"]
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.982742 4688 scope.go:117] "RemoveContainer" containerID="7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3"
Nov 25 13:27:21 crc kubenswrapper[4688]: I1125 13:27:21.985931 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8q5q6"]
Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.037747 4688 scope.go:117] "RemoveContainer" containerID="1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa"
Nov 25 13:27:22 crc kubenswrapper[4688]: E1125 13:27:22.038231 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa\": container with ID starting with 1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa not found: ID does not exist" containerID="1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa"
Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.038342
4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa"} err="failed to get container status \"1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa\": rpc error: code = NotFound desc = could not find container \"1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa\": container with ID starting with 1403443f4739df3514cbc6373a99f91b260a4d559ff6bd569ee6a8642df893fa not found: ID does not exist" Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.038440 4688 scope.go:117] "RemoveContainer" containerID="a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735" Nov 25 13:27:22 crc kubenswrapper[4688]: E1125 13:27:22.038772 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735\": container with ID starting with a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735 not found: ID does not exist" containerID="a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735" Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.038871 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735"} err="failed to get container status \"a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735\": rpc error: code = NotFound desc = could not find container \"a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735\": container with ID starting with a8c09b8208d0bdb996b4e08efbdb204ccb921ba2d8835f50cb614f5f37c8e735 not found: ID does not exist" Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.038959 4688 scope.go:117] "RemoveContainer" containerID="7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3" Nov 25 13:27:22 crc kubenswrapper[4688]: E1125 13:27:22.039239 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3\": container with ID starting with 7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3 not found: ID does not exist" containerID="7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3" Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.039659 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3"} err="failed to get container status \"7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3\": rpc error: code = NotFound desc = could not find container \"7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3\": container with ID starting with 7b418e8639360eb0f4830cb4156ff9dabc467e1c9c1ffa955ae93ea89505e0e3 not found: ID does not exist" Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.152561 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qm98m_486d0cf3-7cf3-42fe-a5e8-1c57e878bf0c/control-plane-machine-set-operator/0.log" Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.343497 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8djqh_199bf3df-657c-4fec-99c8-00abf00d41c0/kube-rbac-proxy/0.log" Nov 25 13:27:22 crc 
kubenswrapper[4688]: I1125 13:27:22.373674 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8djqh_199bf3df-657c-4fec-99c8-00abf00d41c0/machine-api-operator/0.log" Nov 25 13:27:22 crc kubenswrapper[4688]: I1125 13:27:22.760750 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" path="/var/lib/kubelet/pods/35438d40-7160-401e-a9e0-81c585bec3d3/volumes" Nov 25 13:27:36 crc kubenswrapper[4688]: I1125 13:27:36.317764 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jhcbt_8f855a3c-ac32-447f-8fca-8228aa44f91a/cert-manager-controller/0.log" Nov 25 13:27:36 crc kubenswrapper[4688]: I1125 13:27:36.516166 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-d2tx4_3d15c9cb-bc3f-4042-a05e-1a6e66e4348c/cert-manager-cainjector/1.log" Nov 25 13:27:36 crc kubenswrapper[4688]: I1125 13:27:36.530058 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-d2tx4_3d15c9cb-bc3f-4042-a05e-1a6e66e4348c/cert-manager-cainjector/0.log" Nov 25 13:27:36 crc kubenswrapper[4688]: I1125 13:27:36.584517 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-f4bkk_e46b8030-eb14-4ce3-9519-fdaf23f4f7cb/cert-manager-webhook/0.log" Nov 25 13:27:48 crc kubenswrapper[4688]: I1125 13:27:48.932924 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-h8w7b_6bd5cb4b-20c0-4042-b348-001e8084c2f4/nmstate-console-plugin/0.log" Nov 25 13:27:49 crc kubenswrapper[4688]: I1125 13:27:49.123642 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-8pxc2_d63a65a2-b47a-49e4-8489-f7aee9d6929d/kube-rbac-proxy/0.log" Nov 25 13:27:49 crc kubenswrapper[4688]: I1125 13:27:49.131665 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vc899_48a378b8-e17a-41c3-b612-a9c503dcbc58/nmstate-handler/0.log" Nov 25 13:27:49 crc kubenswrapper[4688]: I1125 13:27:49.212191 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-8pxc2_d63a65a2-b47a-49e4-8489-f7aee9d6929d/nmstate-metrics/0.log" Nov 25 13:27:49 crc kubenswrapper[4688]: I1125 13:27:49.356407 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-l75hn_c08e5e0c-e882-43da-8211-ab86d099db71/nmstate-operator/0.log" Nov 25 13:27:49 crc kubenswrapper[4688]: I1125 13:27:49.417129 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-xjfsz_af18cbb6-5f3d-4fa6-914a-421fe283aa4e/nmstate-webhook/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.222050 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-8kfck_e48219fb-3aae-42ff-8dec-3d952e97aff1/kube-rbac-proxy/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.472972 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-frr-files/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.480100 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-8kfck_e48219fb-3aae-42ff-8dec-3d952e97aff1/controller/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.667661 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-frr-files/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.668992 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-reloader/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.676363 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-metrics/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.692061 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-reloader/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.878189 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-metrics/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.879182 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-reloader/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.901696 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-metrics/0.log" Nov 25 13:28:04 crc kubenswrapper[4688]: I1125 13:28:04.929191 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-frr-files/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.114199 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-frr-files/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.129776 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-metrics/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.141138 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/cp-reloader/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.183216 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/controller/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.297480 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/frr-metrics/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.310328 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/kube-rbac-proxy/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.402871 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/kube-rbac-proxy-frr/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.567971 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/reloader/0.log" Nov 25 13:28:05 crc 
kubenswrapper[4688]: I1125 13:28:05.634246 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-bdmv9_a8a77cbc-9814-4996-9ee3-d1e63f581842/frr-k8s-webhook-server/0.log" Nov 25 13:28:05 crc kubenswrapper[4688]: I1125 13:28:05.819145 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-744bc4ddc8-58c5m_d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3/manager/3.log" Nov 25 13:28:06 crc kubenswrapper[4688]: I1125 13:28:06.294303 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-744bc4ddc8-58c5m_d664a9c5-3ebd-49be-84d0-eb05c2b8e7b3/manager/2.log" Nov 25 13:28:06 crc kubenswrapper[4688]: I1125 13:28:06.315148 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-94654dbc4-7h22l_29f74196-0858-470b-8d69-2a8c67753827/webhook-server/0.log" Nov 25 13:28:06 crc kubenswrapper[4688]: I1125 13:28:06.615053 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lthrr_4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7/kube-rbac-proxy/0.log" Nov 25 13:28:06 crc kubenswrapper[4688]: I1125 13:28:06.919506 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rbtk_6fd1ba9a-30d9-4209-8fc8-0f2b22fb45f4/frr/0.log" Nov 25 13:28:07 crc kubenswrapper[4688]: I1125 13:28:07.234885 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lthrr_4eab9fc3-d9f4-4d77-ba40-1ef9bd4bddd7/speaker/0.log" Nov 25 13:28:19 crc kubenswrapper[4688]: I1125 13:28:19.360289 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/util/0.log" Nov 25 13:28:19 crc kubenswrapper[4688]: I1125 13:28:19.497758 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/util/0.log" Nov 25 13:28:19 crc kubenswrapper[4688]: I1125 13:28:19.555901 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/pull/0.log" Nov 25 13:28:19 crc kubenswrapper[4688]: I1125 13:28:19.576902 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/pull/0.log" Nov 25 13:28:19 crc kubenswrapper[4688]: I1125 13:28:19.724660 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/pull/0.log" Nov 25 13:28:19 crc kubenswrapper[4688]: I1125 13:28:19.731384 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/util/0.log" Nov 25 13:28:19 crc kubenswrapper[4688]: I1125 13:28:19.742257 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e7gsjr_8e87d6d2-0104-4184-a6ca-8bc371a7e768/extract/0.log" Nov 25 13:28:19 crc kubenswrapper[4688]: I1125 13:28:19.886718 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/util/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.078669 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/util/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.111355 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/pull/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.118456 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/pull/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.267702 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/util/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.287717 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/extract/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.306969 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b4kjw_f264cf40-eeb7-48d8-93d5-af0c6953390e/pull/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.488380 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-utilities/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.662418 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-content/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.677537 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-utilities/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.713264 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-content/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.865047 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-content/0.log" Nov 25 13:28:20 crc kubenswrapper[4688]: I1125 13:28:20.868040 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/extract-utilities/0.log" Nov 25 13:28:21 crc kubenswrapper[4688]: I1125 13:28:21.068864 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-utilities/0.log" Nov 25 13:28:21 crc kubenswrapper[4688]: I1125 13:28:21.388657 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-utilities/0.log" Nov 25 13:28:21 crc kubenswrapper[4688]: I1125 13:28:21.415403 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-content/0.log" Nov 25 13:28:21 crc kubenswrapper[4688]: I1125 13:28:21.429549 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-content/0.log" Nov 25 13:28:21 crc kubenswrapper[4688]: I1125 13:28:21.651794 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-content/0.log" Nov 25 13:28:21 crc kubenswrapper[4688]: I1125 13:28:21.658737 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/extract-utilities/0.log" Nov 25 13:28:21 crc kubenswrapper[4688]: I1125 13:28:21.956386 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/util/0.log" Nov 25 13:28:21 crc kubenswrapper[4688]: I1125 13:28:21.985380 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xfkp8_ca8b02cb-547f-4625-ada1-6ce7d2cb8d7f/registry-server/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.165532 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/util/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.245429 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/pull/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.300216 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/pull/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.464618 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/util/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.540189 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/extract/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.540488 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lq2s2_9012fbba-8b92-4bbe-88ec-1ac46a53ce34/pull/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.678010 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vxpfw_ef2dd753-bae0-4992-ad54-4fd56d590f82/registry-server/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.756970 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ps6lt_499bbc68-a6dd-4670-acef-2dfcce904fc3/marketplace-operator/0.log" Nov 25 13:28:22 crc kubenswrapper[4688]: I1125 13:28:22.908716 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-utilities/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.034788 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-content/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.050206 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-utilities/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.091533 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-content/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.224771 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-utilities/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.235629 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/extract-content/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.277200 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xfgrd_25ba3607-a4a2-4bc5-8835-980a9ff6526f/registry-server/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.283546 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-utilities/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.493676 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-content/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.493773 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-content/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.503997 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-utilities/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.633925 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-content/0.log" Nov 25 13:28:23 crc kubenswrapper[4688]: I1125 13:28:23.669414 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/extract-utilities/0.log" Nov 25 13:28:24 crc kubenswrapper[4688]: I1125 13:28:24.265619 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-49974_2c2211fa-01ae-43b9-9a3a-273b2e1d79ed/registry-server/0.log" Nov 25 13:28:37 crc kubenswrapper[4688]: I1125 13:28:37.276020 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-9n7g5_48743801-c673-4010-931f-62cdb6ecaa61/prometheus-operator/0.log" Nov 25 13:28:37 crc kubenswrapper[4688]: I1125 13:28:37.673646 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-857dd5c86f-9qcpk_c4d68b3c-cac2-4e12-ac0c-788a6e134a8a/prometheus-operator-admission-webhook/0.log" Nov 25 13:28:37 crc kubenswrapper[4688]: I1125 13:28:37.716167 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-857dd5c86f-p8j9s_fb4b1751-6bd8-418b-a987-7314862f08dc/prometheus-operator-admission-webhook/0.log" Nov 25 13:28:37 crc kubenswrapper[4688]: I1125 13:28:37.983133 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-md9tl_6995a349-20f2-40e7-a7f9-0ee6c8535bd1/perses-operator/0.log" Nov 25 13:28:37 crc kubenswrapper[4688]: I1125 13:28:37.988054 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-fv2lb_603a75fc-30c7-4bcf-98ee-1b24c2c0c93c/operator/0.log" Nov 25 13:28:47 crc kubenswrapper[4688]: I1125 13:28:47.854019 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:28:47 crc kubenswrapper[4688]: I1125 13:28:47.854602 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:28:52 crc kubenswrapper[4688]: E1125 13:28:52.806496 4688 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.159:45080->38.102.83.159:39911: read tcp 38.102.83.159:45080->38.102.83.159:39911: read: connection reset by peer Nov 25 13:29:17 crc kubenswrapper[4688]: I1125 13:29:17.854019 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:29:17 crc kubenswrapper[4688]: I1125 13:29:17.854565 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:29:47 crc kubenswrapper[4688]: I1125 13:29:47.854593 4688 patch_prober.go:28] interesting pod/machine-config-daemon-6pql6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 13:29:47 crc kubenswrapper[4688]: I1125 13:29:47.855248 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 13:29:47 crc kubenswrapper[4688]: I1125 13:29:47.855306 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" Nov 25 13:29:47 crc kubenswrapper[4688]: I1125 13:29:47.856213 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4"} pod="openshift-machine-config-operator/machine-config-daemon-6pql6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 13:29:47 crc kubenswrapper[4688]: I1125 13:29:47.856294 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerName="machine-config-daemon" containerID="cri-o://d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" gracePeriod=600 Nov 25 13:29:47 crc kubenswrapper[4688]: E1125 13:29:47.993849 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:29:48 crc kubenswrapper[4688]: I1125 13:29:48.464803 4688 generic.go:334] "Generic (PLEG): container finished" podID="c1fd8b76-41b5-4979-be54-9c7441c21aca" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" exitCode=0 Nov 25 13:29:48 crc kubenswrapper[4688]: I1125 13:29:48.465136 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" event={"ID":"c1fd8b76-41b5-4979-be54-9c7441c21aca","Type":"ContainerDied","Data":"d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4"} Nov 25 13:29:48 crc kubenswrapper[4688]: I1125 13:29:48.465175 4688 scope.go:117] "RemoveContainer" containerID="77d88e46a36225e058b9406bf0627cacbc3ffa65e3efd7cc35fdce3c5d13c2b6" Nov 25 13:29:48 crc kubenswrapper[4688]: I1125 13:29:48.465892 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:29:48 crc kubenswrapper[4688]: E1125 13:29:48.466274 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:29:59 crc kubenswrapper[4688]: I1125 13:29:59.740962 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:29:59 crc kubenswrapper[4688]: E1125 13:29:59.741789 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.178455 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd"] Nov 25 13:30:00 crc kubenswrapper[4688]: E1125 13:30:00.179387 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerName="registry-server" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.179413 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerName="registry-server" Nov 25 13:30:00 crc kubenswrapper[4688]: E1125 13:30:00.179444 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" containerName="extract-content" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.179453 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" containerName="extract-content" Nov 25 13:30:00 crc kubenswrapper[4688]: E1125 13:30:00.179468 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerName="extract-utilities" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.179477 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerName="extract-utilities" Nov 25 13:30:00 crc kubenswrapper[4688]: E1125 13:30:00.179514 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerName="extract-content" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.179682 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerName="extract-content" Nov 25 13:30:00 crc kubenswrapper[4688]: E1125 13:30:00.179746 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" containerName="extract-utilities" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.179754 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" containerName="extract-utilities" Nov 25 13:30:00 crc kubenswrapper[4688]: E1125 13:30:00.179773 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" containerName="registry-server" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.179782 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" containerName="registry-server" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.180052 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3aba14-37b8-494d-bb47-34a746a41d3e" containerName="registry-server" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.180105 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="35438d40-7160-401e-a9e0-81c585bec3d3" containerName="registry-server" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.181065 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.183794 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.183867 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.203682 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd"] Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.331162 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjdx\" (UniqueName: \"kubernetes.io/projected/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-kube-api-access-ltjdx\") pod \"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.331654 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-secret-volume\") pod \"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.331855 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-config-volume\") pod \"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.433749 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-secret-volume\") pod \"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.433872 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-config-volume\") pod \"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.433905 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjdx\" (UniqueName: \"kubernetes.io/projected/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-kube-api-access-ltjdx\") pod \"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.434931 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-config-volume\") pod 
\"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.443491 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-secret-volume\") pod \"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.462386 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjdx\" (UniqueName: \"kubernetes.io/projected/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-kube-api-access-ltjdx\") pod \"collect-profiles-29401290-gqvkd\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.504450 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:00 crc kubenswrapper[4688]: I1125 13:30:00.980374 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd"] Nov 25 13:30:01 crc kubenswrapper[4688]: I1125 13:30:01.614920 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" event={"ID":"bab21c74-bf6d-46c5-8a73-34f0f5cafd35","Type":"ContainerStarted","Data":"9843dd33862b3175bb460dfd8a4b24d671b5fe61acbbace9b2ea2054b90c71df"} Nov 25 13:30:01 crc kubenswrapper[4688]: I1125 13:30:01.615230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" event={"ID":"bab21c74-bf6d-46c5-8a73-34f0f5cafd35","Type":"ContainerStarted","Data":"831f9cbd276e83c4f880a669f5efdab7cecd002d89ebd5397c376e1b327341de"} Nov 25 13:30:01 crc kubenswrapper[4688]: I1125 13:30:01.630693 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" podStartSLOduration=1.6306782389999999 podStartE2EDuration="1.630678239s" podCreationTimestamp="2025-11-25 13:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 13:30:01.629498477 +0000 UTC m=+4551.739127345" watchObservedRunningTime="2025-11-25 13:30:01.630678239 +0000 UTC m=+4551.740307117" Nov 25 13:30:02 crc kubenswrapper[4688]: I1125 13:30:02.626750 4688 generic.go:334] "Generic (PLEG): container finished" podID="bab21c74-bf6d-46c5-8a73-34f0f5cafd35" containerID="9843dd33862b3175bb460dfd8a4b24d671b5fe61acbbace9b2ea2054b90c71df" exitCode=0 Nov 25 13:30:02 crc kubenswrapper[4688]: I1125 13:30:02.626862 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" event={"ID":"bab21c74-bf6d-46c5-8a73-34f0f5cafd35","Type":"ContainerDied","Data":"9843dd33862b3175bb460dfd8a4b24d671b5fe61acbbace9b2ea2054b90c71df"} Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.029843 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.126737 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-config-volume\") pod \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.126927 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltjdx\" (UniqueName: \"kubernetes.io/projected/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-kube-api-access-ltjdx\") pod \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.126995 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-secret-volume\") pod \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\" (UID: \"bab21c74-bf6d-46c5-8a73-34f0f5cafd35\") " Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.127824 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-config-volume" (OuterVolumeSpecName: "config-volume") pod "bab21c74-bf6d-46c5-8a73-34f0f5cafd35" (UID: "bab21c74-bf6d-46c5-8a73-34f0f5cafd35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.138697 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-kube-api-access-ltjdx" (OuterVolumeSpecName: "kube-api-access-ltjdx") pod "bab21c74-bf6d-46c5-8a73-34f0f5cafd35" (UID: "bab21c74-bf6d-46c5-8a73-34f0f5cafd35"). InnerVolumeSpecName "kube-api-access-ltjdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.138963 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bab21c74-bf6d-46c5-8a73-34f0f5cafd35" (UID: "bab21c74-bf6d-46c5-8a73-34f0f5cafd35"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.229285 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.229326 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltjdx\" (UniqueName: \"kubernetes.io/projected/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-kube-api-access-ltjdx\") on node \"crc\" DevicePath \"\"" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.229342 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bab21c74-bf6d-46c5-8a73-34f0f5cafd35-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.647493 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" event={"ID":"bab21c74-bf6d-46c5-8a73-34f0f5cafd35","Type":"ContainerDied","Data":"831f9cbd276e83c4f880a669f5efdab7cecd002d89ebd5397c376e1b327341de"} Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.647560 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="831f9cbd276e83c4f880a669f5efdab7cecd002d89ebd5397c376e1b327341de" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.647584 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401290-gqvkd" Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.721515 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w"] Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.733560 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401245-tgl9w"] Nov 25 13:30:04 crc kubenswrapper[4688]: I1125 13:30:04.753207 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6910360d-8cc4-4472-898d-c08254b8575d" path="/var/lib/kubelet/pods/6910360d-8cc4-4472-898d-c08254b8575d/volumes" Nov 25 13:30:11 crc kubenswrapper[4688]: I1125 13:30:11.739987 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:30:11 crc kubenswrapper[4688]: E1125 13:30:11.741009 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:30:12 crc kubenswrapper[4688]: I1125 13:30:12.750889 4688 generic.go:334] "Generic (PLEG): container finished" podID="2ffce485-f876-4675-91df-28d6832c6604" containerID="0abb506c8ba63e41a65d9a1f708280fd702ccb96bd05bc0beba4488f341d166f" exitCode=0 Nov 25 13:30:12 crc kubenswrapper[4688]: I1125 13:30:12.751312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" event={"ID":"2ffce485-f876-4675-91df-28d6832c6604","Type":"ContainerDied","Data":"0abb506c8ba63e41a65d9a1f708280fd702ccb96bd05bc0beba4488f341d166f"} Nov 25 13:30:12 crc 
kubenswrapper[4688]: I1125 13:30:12.751975 4688 scope.go:117] "RemoveContainer" containerID="0abb506c8ba63e41a65d9a1f708280fd702ccb96bd05bc0beba4488f341d166f" Nov 25 13:30:13 crc kubenswrapper[4688]: I1125 13:30:13.356684 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7qmqw_must-gather-zmw4z_2ffce485-f876-4675-91df-28d6832c6604/gather/0.log" Nov 25 13:30:23 crc kubenswrapper[4688]: I1125 13:30:23.591362 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7qmqw/must-gather-zmw4z"] Nov 25 13:30:23 crc kubenswrapper[4688]: I1125 13:30:23.592164 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" podUID="2ffce485-f876-4675-91df-28d6832c6604" containerName="copy" containerID="cri-o://b294bb65ffa0f575951ae8313e3b2f87c8ce4fd3618a155fda8c98236e0f9857" gracePeriod=2 Nov 25 13:30:23 crc kubenswrapper[4688]: I1125 13:30:23.605198 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7qmqw/must-gather-zmw4z"] Nov 25 13:30:23 crc kubenswrapper[4688]: I1125 13:30:23.853488 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7qmqw_must-gather-zmw4z_2ffce485-f876-4675-91df-28d6832c6604/copy/0.log" Nov 25 13:30:23 crc kubenswrapper[4688]: I1125 13:30:23.855646 4688 generic.go:334] "Generic (PLEG): container finished" podID="2ffce485-f876-4675-91df-28d6832c6604" containerID="b294bb65ffa0f575951ae8313e3b2f87c8ce4fd3618a155fda8c98236e0f9857" exitCode=143 Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.769901 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7qmqw_must-gather-zmw4z_2ffce485-f876-4675-91df-28d6832c6604/copy/0.log" Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.771433 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.853255 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctff7\" (UniqueName: \"kubernetes.io/projected/2ffce485-f876-4675-91df-28d6832c6604-kube-api-access-ctff7\") pod \"2ffce485-f876-4675-91df-28d6832c6604\" (UID: \"2ffce485-f876-4675-91df-28d6832c6604\") " Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.853407 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2ffce485-f876-4675-91df-28d6832c6604-must-gather-output\") pod \"2ffce485-f876-4675-91df-28d6832c6604\" (UID: \"2ffce485-f876-4675-91df-28d6832c6604\") " Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.861984 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffce485-f876-4675-91df-28d6832c6604-kube-api-access-ctff7" (OuterVolumeSpecName: "kube-api-access-ctff7") pod "2ffce485-f876-4675-91df-28d6832c6604" (UID: "2ffce485-f876-4675-91df-28d6832c6604"). InnerVolumeSpecName "kube-api-access-ctff7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.876652 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7qmqw_must-gather-zmw4z_2ffce485-f876-4675-91df-28d6832c6604/copy/0.log" Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.877279 4688 scope.go:117] "RemoveContainer" containerID="b294bb65ffa0f575951ae8313e3b2f87c8ce4fd3618a155fda8c98236e0f9857" Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.877798 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7qmqw/must-gather-zmw4z" Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.920370 4688 scope.go:117] "RemoveContainer" containerID="0abb506c8ba63e41a65d9a1f708280fd702ccb96bd05bc0beba4488f341d166f" Nov 25 13:30:24 crc kubenswrapper[4688]: I1125 13:30:24.960607 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctff7\" (UniqueName: \"kubernetes.io/projected/2ffce485-f876-4675-91df-28d6832c6604-kube-api-access-ctff7\") on node \"crc\" DevicePath \"\"" Nov 25 13:30:25 crc kubenswrapper[4688]: I1125 13:30:25.037200 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffce485-f876-4675-91df-28d6832c6604-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2ffce485-f876-4675-91df-28d6832c6604" (UID: "2ffce485-f876-4675-91df-28d6832c6604"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:30:25 crc kubenswrapper[4688]: I1125 13:30:25.063863 4688 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2ffce485-f876-4675-91df-28d6832c6604-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 13:30:26 crc kubenswrapper[4688]: I1125 13:30:26.740615 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:30:26 crc kubenswrapper[4688]: E1125 13:30:26.741190 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:30:26 crc kubenswrapper[4688]: I1125 13:30:26.750639 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ffce485-f876-4675-91df-28d6832c6604" path="/var/lib/kubelet/pods/2ffce485-f876-4675-91df-28d6832c6604/volumes" Nov 25 13:30:35 crc kubenswrapper[4688]: I1125 13:30:35.767246 4688 scope.go:117] "RemoveContainer" containerID="75cbe4d64ee6b3dbec0e0a58bc653d92b0a2375d49d0405448b50e1f5eb87e72" Nov 25 13:30:41 crc kubenswrapper[4688]: I1125 13:30:41.739932 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:30:41 crc kubenswrapper[4688]: E1125 13:30:41.742057 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:30:56 crc kubenswrapper[4688]: I1125 13:30:56.741244 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:30:56 crc kubenswrapper[4688]: E1125 13:30:56.742413 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:31:08 crc kubenswrapper[4688]: I1125 13:31:08.739943 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:31:08 crc kubenswrapper[4688]: E1125 13:31:08.745566 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:31:22 crc kubenswrapper[4688]: I1125 13:31:22.740450 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:31:22 crc kubenswrapper[4688]: E1125 13:31:22.741401 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:31:37 crc kubenswrapper[4688]: I1125 13:31:37.739788 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:31:37 crc kubenswrapper[4688]: E1125 13:31:37.740577 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:31:50 crc kubenswrapper[4688]: I1125 13:31:50.755831 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:31:50 crc kubenswrapper[4688]: E1125 13:31:50.757366 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:32:03 crc kubenswrapper[4688]: I1125 13:32:03.740264 4688 
scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:32:03 crc kubenswrapper[4688]: E1125 13:32:03.741147 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:32:18 crc kubenswrapper[4688]: I1125 13:32:18.740218 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:32:18 crc kubenswrapper[4688]: E1125 13:32:18.740986 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:32:32 crc kubenswrapper[4688]: I1125 13:32:32.741218 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:32:32 crc kubenswrapper[4688]: E1125 13:32:32.742720 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:32:44 crc kubenswrapper[4688]: I1125 13:32:44.740792 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:32:44 crc kubenswrapper[4688]: E1125 13:32:44.741586 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:32:56 crc kubenswrapper[4688]: I1125 13:32:56.740508 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:32:56 crc kubenswrapper[4688]: E1125 13:32:56.742863 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:33:07 crc kubenswrapper[4688]: I1125 13:33:07.740690 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:33:07 crc kubenswrapper[4688]: E1125 13:33:07.741462 4688 
Nov 25 13:33:20 crc kubenswrapper[4688]: I1125 13:33:20.748989 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4"
Nov 25 13:33:20 crc kubenswrapper[4688]: E1125 13:33:20.749816 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca"
Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.958492 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lwwdh"]
Nov 25 13:33:34 crc kubenswrapper[4688]: E1125 13:33:34.959631 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffce485-f876-4675-91df-28d6832c6604" containerName="copy"
Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.959648 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffce485-f876-4675-91df-28d6832c6604" containerName="copy"
Nov 25 13:33:34 crc kubenswrapper[4688]: E1125 13:33:34.959682 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab21c74-bf6d-46c5-8a73-34f0f5cafd35" containerName="collect-profiles"
Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.959691 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab21c74-bf6d-46c5-8a73-34f0f5cafd35" containerName="collect-profiles"
Nov 25 13:33:34 crc kubenswrapper[4688]: E1125 13:33:34.959725 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffce485-f876-4675-91df-28d6832c6604" containerName="gather"
Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.959734 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffce485-f876-4675-91df-28d6832c6604" containerName="gather"
Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.959998 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffce485-f876-4675-91df-28d6832c6604" containerName="copy"
Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.960023 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffce485-f876-4675-91df-28d6832c6604" containerName="gather"
Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.960043 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab21c74-bf6d-46c5-8a73-34f0f5cafd35" containerName="collect-profiles"
Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.962615 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwwdh"
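The cpu_manager/memory_manager burst above fires at pod admission: before admitting redhat-operators-lwwdh, the resource managers drop per-container state belonging to pods deleted earlier (the copy, gather, and collect-profiles containers), so the E-level lines are routine housekeeping, not failures. The shape of that cleanup, reduced to a map sweep; the types and names below are this sketch's own, not Kubernetes code:

```go
package main

import "fmt"

type key struct{ podUID, container string }

// assignments stands in for the CPU manager's per-container state.
type assignments map[key]string

// removeStaleState drops entries for pods that are no longer active, the
// moral equivalent of the "RemoveStaleState: removing container" lines above.
func (a assignments) removeStaleState(active map[string]bool) {
	for k := range a { // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("removing stale container %s/%s\n", k.podUID, k.container)
			delete(a, k)
		}
	}
}

func main() {
	a := assignments{
		{"2ffce485", "copy"}:             "cpus 0-1",
		{"2ffce485", "gather"}:           "cpus 2-3",
		{"bab21c74", "collect-profiles"}: "cpus 4",
	}
	a.removeStaleState(map[string]bool{"c5df6410": true}) // only the new pod is active
	fmt.Println("remaining entries:", len(a))             // 0
}
```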
Need to start a new one" pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:34 crc kubenswrapper[4688]: I1125 13:33:34.983420 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lwwdh"] Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.147454 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-catalog-content\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.147803 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-utilities\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.147891 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5gjb\" (UniqueName: \"kubernetes.io/projected/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-kube-api-access-c5gjb\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.250046 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-utilities\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.250098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5gjb\" (UniqueName: \"kubernetes.io/projected/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-kube-api-access-c5gjb\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.250298 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-catalog-content\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.250631 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-utilities\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.250915 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-catalog-content\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.267020 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c5gjb\" (UniqueName: \"kubernetes.io/projected/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-kube-api-access-c5gjb\") pod \"redhat-operators-lwwdh\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.290226 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.740732 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:33:35 crc kubenswrapper[4688]: E1125 13:33:35.741188 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:33:35 crc kubenswrapper[4688]: I1125 13:33:35.844226 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lwwdh"] Nov 25 13:33:36 crc kubenswrapper[4688]: I1125 13:33:36.849218 4688 generic.go:334] "Generic (PLEG): container finished" podID="c5df6410-72e2-4e65-b5c7-8d99f3f02cb5" containerID="ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0" exitCode=0 Nov 25 13:33:36 crc kubenswrapper[4688]: I1125 13:33:36.849508 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwwdh" event={"ID":"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5","Type":"ContainerDied","Data":"ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0"} Nov 25 13:33:36 crc kubenswrapper[4688]: I1125 13:33:36.849915 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwwdh" event={"ID":"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5","Type":"ContainerStarted","Data":"d8743487dd0fe797e6fa26bdb9cd92fc52d9207ff6d5250133c4976141003b81"} Nov 25 13:33:36 crc kubenswrapper[4688]: I1125 13:33:36.852554 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 13:33:38 crc kubenswrapper[4688]: I1125 13:33:38.872908 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwwdh" event={"ID":"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5","Type":"ContainerStarted","Data":"5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8"} Nov 25 13:33:43 crc kubenswrapper[4688]: I1125 13:33:43.926098 4688 generic.go:334] "Generic (PLEG): container finished" podID="c5df6410-72e2-4e65-b5c7-8d99f3f02cb5" containerID="5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8" exitCode=0 Nov 25 13:33:43 crc kubenswrapper[4688]: I1125 13:33:43.926179 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwwdh" event={"ID":"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5","Type":"ContainerDied","Data":"5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8"} Nov 25 13:33:45 crc kubenswrapper[4688]: I1125 13:33:45.950502 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwwdh" 
event={"ID":"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5","Type":"ContainerStarted","Data":"f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e"} Nov 25 13:33:45 crc kubenswrapper[4688]: I1125 13:33:45.991724 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lwwdh" podStartSLOduration=3.87626908 podStartE2EDuration="11.991701612s" podCreationTimestamp="2025-11-25 13:33:34 +0000 UTC" firstStartedPulling="2025-11-25 13:33:36.852075661 +0000 UTC m=+4766.961704549" lastFinishedPulling="2025-11-25 13:33:44.967508213 +0000 UTC m=+4775.077137081" observedRunningTime="2025-11-25 13:33:45.980987275 +0000 UTC m=+4776.090616153" watchObservedRunningTime="2025-11-25 13:33:45.991701612 +0000 UTC m=+4776.101330490" Nov 25 13:33:49 crc kubenswrapper[4688]: I1125 13:33:49.740393 4688 scope.go:117] "RemoveContainer" containerID="d6e811f65e4f798d6db0dbfeddca919d11edbc11538396c767cf238ed9cb70e4" Nov 25 13:33:49 crc kubenswrapper[4688]: E1125 13:33:49.741099 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6pql6_openshift-machine-config-operator(c1fd8b76-41b5-4979-be54-9c7441c21aca)\"" pod="openshift-machine-config-operator/machine-config-daemon-6pql6" podUID="c1fd8b76-41b5-4979-be54-9c7441c21aca" Nov 25 13:33:55 crc kubenswrapper[4688]: I1125 13:33:55.291232 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:55 crc kubenswrapper[4688]: I1125 13:33:55.291762 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:55 crc kubenswrapper[4688]: I1125 13:33:55.350781 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:56 crc kubenswrapper[4688]: I1125 13:33:56.149227 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:56 crc kubenswrapper[4688]: I1125 13:33:56.199041 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lwwdh"] Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.112485 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lwwdh" podUID="c5df6410-72e2-4e65-b5c7-8d99f3f02cb5" containerName="registry-server" containerID="cri-o://f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e" gracePeriod=2 Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.585805 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.718949 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5gjb\" (UniqueName: \"kubernetes.io/projected/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-kube-api-access-c5gjb\") pod \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.719910 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-utilities\") pod \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.719978 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-catalog-content\") pod \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\" (UID: \"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5\") " Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.721345 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-utilities" (OuterVolumeSpecName: "utilities") pod "c5df6410-72e2-4e65-b5c7-8d99f3f02cb5" (UID: "c5df6410-72e2-4e65-b5c7-8d99f3f02cb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.726976 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-kube-api-access-c5gjb" (OuterVolumeSpecName: "kube-api-access-c5gjb") pod "c5df6410-72e2-4e65-b5c7-8d99f3f02cb5" (UID: "c5df6410-72e2-4e65-b5c7-8d99f3f02cb5"). InnerVolumeSpecName "kube-api-access-c5gjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.822723 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5gjb\" (UniqueName: \"kubernetes.io/projected/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-kube-api-access-c5gjb\") on node \"crc\" DevicePath \"\"" Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.823058 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.828026 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5df6410-72e2-4e65-b5c7-8d99f3f02cb5" (UID: "c5df6410-72e2-4e65-b5c7-8d99f3f02cb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 13:33:58 crc kubenswrapper[4688]: I1125 13:33:58.925012 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5df6410-72e2-4e65-b5c7-8d99f3f02cb5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.126319 4688 generic.go:334] "Generic (PLEG): container finished" podID="c5df6410-72e2-4e65-b5c7-8d99f3f02cb5" containerID="f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e" exitCode=0 Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.126374 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwwdh" Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.126389 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwwdh" event={"ID":"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5","Type":"ContainerDied","Data":"f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e"} Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.128150 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwwdh" event={"ID":"c5df6410-72e2-4e65-b5c7-8d99f3f02cb5","Type":"ContainerDied","Data":"d8743487dd0fe797e6fa26bdb9cd92fc52d9207ff6d5250133c4976141003b81"} Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.128196 4688 scope.go:117] "RemoveContainer" containerID="f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e" Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.157161 4688 scope.go:117] "RemoveContainer" containerID="5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8" Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.179198 4688 scope.go:117] "RemoveContainer" containerID="ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0" Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.213626 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lwwdh"] Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.234071 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lwwdh"] Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.239415 4688 scope.go:117] "RemoveContainer" containerID="f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e" Nov 25 13:33:59 crc kubenswrapper[4688]: E1125 13:33:59.240813 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e\": container with ID starting with f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e not found: ID does not exist" containerID="f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e" Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.240861 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e"} err="failed to get container status \"f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e\": rpc error: code = NotFound desc = could not find container \"f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e\": container with ID starting with f3c40b0587f2463cda8f8fc2834efca4c8b73a29a9cfd61969578fb07f19532e not found: ID does not exist" Nov 25 13:33:59 crc 
Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.240890 4688 scope.go:117] "RemoveContainer" containerID="5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8"
Nov 25 13:33:59 crc kubenswrapper[4688]: E1125 13:33:59.242745 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8\": container with ID starting with 5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8 not found: ID does not exist" containerID="5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8"
Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.242774 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8"} err="failed to get container status \"5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8\": rpc error: code = NotFound desc = could not find container \"5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8\": container with ID starting with 5dca7d149d0b72ad28f04f32e655a1c4e4a1bcd5286629871de031df3a797da8 not found: ID does not exist"
Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.242788 4688 scope.go:117] "RemoveContainer" containerID="ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0"
Nov 25 13:33:59 crc kubenswrapper[4688]: E1125 13:33:59.242992 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0\": container with ID starting with ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0 not found: ID does not exist" containerID="ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0"
Nov 25 13:33:59 crc kubenswrapper[4688]: I1125 13:33:59.243017 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0"} err="failed to get container status \"ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0\": rpc error: code = NotFound desc = could not find container \"ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0\": container with ID starting with ab2fe64c991d514ee1a28893462631438bd2d6760a1d0ec3b44e151210b345c0 not found: ID does not exist"
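The NotFound churn at the end is benign: the kubelet re-issues RemoveContainer for IDs the runtime already deleted, CRI-O answers with gRPC NotFound, and the kubelet records the failed status lookup and moves on, keeping deletion idempotent. A sketch of that treat-NotFound-as-success pattern; the helper names are hypothetical, though google.golang.org/grpc/status and codes are the real gRPC packages (the module must be in go.mod):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIdempotent wraps a delete call and treats "already gone" as success,
// the behavior implied by the DeleteContainer/NotFound pairs above.
func removeIdempotent(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already gone; treating delete as success\n", id)
		return nil
	}
	return err // nil on success, or a real failure to surface
}

func main() {
	// Simulated runtime that has already removed the container.
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeIdempotent(gone, "f3c40b05"); err != nil {
		panic(err)
	}
	// A non-NotFound error still propagates.
	boom := func(id string) error { return errors.New("runtime unavailable") }
	fmt.Println(removeIdempotent(boom, "f3c40b05"))
}
```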